The phrase "Root of Trust" turns up at various points in discussions about verified boot and measured boot, and to a first approximation nobody is able to give you a coherent explanation of what it means[1]. The Trusted Computing Group has a fairly wordy definition, but (a) it's a lot of words and (b) I don't like it, so instead I'm going to start by defining a root of trust as "A thing that has to be trustworthy for anything else on your computer to be trustworthy".
(An aside: when I say "trustworthy", it is very easy to interpret this in a cynical manner and assume that "trust" means "trusted by someone I do not necessarily trust to act in my best interest". I want to be absolutely clear that when I say "trustworthy" I mean "trusted by the owner of the computer", and that as far as I'm concerned selling devices that do not allow the owner to define what's trusted is an extremely bad thing in the general case)
Let's take an example. In verified boot, a cryptographic signature of a component is verified before it's allowed to boot. A straightforward implementation of verified boot has the firmware verify the signature on the bootloader or kernel before executing it. In this scenario, the firmware is the root of trust - it's the first thing that makes a determination about whether something should be allowed to run or not[2]. As long as the firmware behaves correctly, and as long as there aren't any vulnerabilities in our boot chain, we know that we booted an OS that was signed with a key we trust.
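(To make the shape of that check concrete, here's a minimal sketch in Python using the pyca/cryptography library. The key, image and handoff are invented stand-ins for illustration - real firmware does this in C against a key baked into ROM or fuses, not with a userspace crypto library:)

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def boot_next_stage(trusted_key, image: bytes, signature: bytes) -> None:
        # The root of trust verifies the signature *before* executing anything
        try:
            trusted_key.verify(signature, image)  # raises InvalidSignature on mismatch
        except InvalidSignature:
            raise SystemExit("refusing to boot: next stage not signed by a trusted key")
        print("signature OK, handing control to the next stage")  # stand-in for the jump

    # Demo: a freshly generated key stands in for the vendor key baked into hardware
    vendor_key = Ed25519PrivateKey.generate()
    bootloader = b"pretend this is a bootloader image"
    boot_next_stage(vendor_key.public_key(), bootloader, vendor_key.sign(bootloader))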
But what guarantees that the firmware behaves correctly? What if someone replaces our firmware with firmware that trusts different keys, or hot-patches the OS as it's booting it? We can't just ask the firmware whether it's trustworthy - trustworthy firmware will say yes, but the thing about malicious firmware is that it can just lie to us (either directly, or by modifying the OS components it boots to lie instead). This is probably not sufficiently trustworthy!
Ok, so let's have the firmware be verified before it's executed. On Intel this is "Boot Guard", on AMD this is "Platform Secure Boot", everywhere else it's just "Secure Boot". Code on the CPU (either in ROM or signed with a key controlled by the CPU vendor) verifies the firmware[3] before executing it. Now the CPU itself is the root of trust, and, well, that seems reasonable - we have to place trust in the CPU, otherwise we can't actually do computing. We can now say with a reasonable degree of confidence (again, in the absence of vulnerabilities) that we booted an OS that we trusted. Hurrah!
Except. How do we know that the CPU actually did that verification? CPUs are generally manufactured without verification being enabled - different system vendors use different signing keys, so those keys can't be installed in the CPU at CPU manufacture time, and vendors need to do code development without signing everything so you can't require that keys be installed before a CPU will work. So, out of the box, a new CPU will boot anything without doing verification[4], and development units will frequently have no verification.
As a device owner, how do you tell whether or not your CPU has this verification enabled? Well, you could ask the CPU, but if you're doing that on a device that booted a compromised OS then maybe it's just hotpatching your OS so when you do that you just get RET_TRUST_ME_BRO even if the CPU is desperately waving its arms around trying to warn you it's a trap. This is, unfortunately, a problem that's basically impossible to solve using verified boot alone - if any component in the chain fails to enforce verification, the trust you're placing in the chain is misplaced and you are going to have a bad day.
So how do we solve it? The answer is that we can't simply ask the OS, we need a mechanism to query the root of trust itself. There's a few ways to do that, but fundamentally they depend on the ability of the root of trust to provide proof of what happened. This requires that the root of trust be able to sign (or cause to be signed) an "attestation" of the system state, a cryptographically verifiable representation of the security-critical configuration and code. The most common form of this is called "measured boot" or "trusted boot", and involves generating a "measurement" of each boot component or configuration (generally a cryptographic hash of it), and storing that measurement somewhere. The important thing is that it must not be possible for the running OS (or any pre-OS component) to arbitrarily modify these measurements, since otherwise a compromised environment could simply go back and rewrite history. One frequently used solution to this is to segregate the storage of the measurements (and the attestation of them) into a separate hardware component that can't be directly manipulated by the OS, such as a Trusted Platform Module. Each part of the boot chain measures relevant security configuration and the next component before executing it and sends that measurement to the TPM, and later the TPM can provide a signed attestation of the measurements it was given. So, an SoC that implements verified boot should create a measurement telling us whether verification is enabled - and, critically, should also create a measurement if it isn't. This is important because failing to measure the disabled state leaves us with the same problem as before; someone can replace the mutable firmware code with code that creates a fake measurement asserting that verified boot was enabled, and if we trust that we're going to have a bad time.
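(A quick aside on mechanics: the reason a compromised OS can't go back and rewrite history is that measurements aren't stored directly - each one is folded into the previous value with a hash, so a register can be extended but never reset. Here's a toy Python model of the extend operation; real PCRs live inside the TPM and the exact details vary, but the idea is the same:)

    import hashlib

    class PCR:
        """Toy model of a TPM Platform Configuration Register."""
        def __init__(self):
            self.value = b"\x00" * 32  # PCRs start from a well-known value at reset

        def extend(self, measurement: bytes):
            # new = H(old || H(measurement)): more history can be folded in,
            # but an already-recorded measurement can never be erased
            digest = hashlib.sha256(measurement).digest()
            self.value = hashlib.sha256(self.value + digest).digest()

    pcr = PCR()
    pcr.extend(b"verified boot policy: ENABLED")  # measured before any mutable code runs
    pcr.extend(b"hash of the trusted signing key")
    pcr.extend(b"bootloader image bytes ...")
    print(pcr.value.hex())  # reproducible only if the same events happened in the same order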
(Of course, simply measuring the fact that verified boot was enabled isn't enough - what if someone replaces the CPU with one that has verified boot enabled, but trusts keys under their control? We also need to measure the keys that were used in order to ensure that the device trusted only the keys we expected, otherwise again we're going to have a bad time)
So, an effective root of trust needs to:
1) Create a measurement of its verified boot policy before running any mutable code
2) Include the trusted signing key in that measurement
3) Actually perform that verification before executing any mutable code
and from then on we're in the hands of the verified code actually being trustworthy, and it's probably written in C so that's almost certainly false, but let's not try to solve every problem today.
Does anything do this today? As far as I can tell, Intel's Boot Guard implementation does. Based on publicly available documentation I can't find any evidence that AMD's Platform Secure Boot does (it does the verification, but it doesn't measure the policy beforehand, so it seems spoofable), but I could be wrong there. I haven't found any general purpose non-x86 parts that do, but this is in the realm of things that SoC vendors seem to believe is some sort of value-add that can only be documented under NDAs, so please do prove me wrong. And then there are add-on solutions like Titan, where we delegate the initial measurement and validation to a separate piece of hardware that measures the firmware as the CPU reads it, rather than requiring that the CPU do it.
But, overall, the situation isn't great. On many platforms there's simply no way to prove that you booted the code you expected to boot. People have designed elaborate security implementations that can be bypassed in a number of ways.
[1] In this respect it is extremely similar to "Zero Trust"
[2] This is a bit of an oversimplification - once we get into dynamic roots of trust like Intel's TXT this story gets more complicated, but let's stick to the simple case today
[3] I'm kind of using "firmware" in an x86ish manner here, so for embedded devices just think of "firmware" as "the first code executed out of flash and signed by someone other than the SoC vendor"
[4] In the Intel case this isn't strictly true, since the keys are stored in the motherboard chipset rather than the CPU, and so taking a board with Boot Guard enabled and swapping out the CPU won't disable Boot Guard because the CPU reads the configuration from the chipset. But many mobile Intel parts have the chipset in the same package as the CPU, so in theory swapping out that entire package would disable Boot Guard. I am not good enough at soldering to demonstrate that.
PLIO
I have been looking for an image viewer that sorts images by modification date by default - the newer, the better. Alas, most image viewers do not do that; even feh somehow fails. What I need is a default listing of images as thumbnails, ordered by modification date. I put the question up on Unix Stackexchange a couple of years ago. Somebody shared ristretto, but that just gives a listing and not the ordering I want. To be more illustrative, maybe this may serve as a guide to what I mean.
There is an RFP for it. While playing with it, I also discovered a sort of side-benefit of the viewer: it tells you if any images have gone corrupt, and you get that info on the CLI, so you can try viewing the image by its path with another viewer or two before deleting it. One of the issues is that there doesn't seem to be a magnify option by default. While the documentation says to use the ^ key to maximize, it doesn't maximize; it took me a while to find, as that isn't a key I use most of the time. Ironically, that is the key used on mobiles quite a bit. Anyways, that needs to be fixed. Sadly, it doesn't have a creation date or modification date sort either; the documentation does say it supports at least the modification date sort, but it doesn't show at my end. I also got "Warning: UNKNOWN command detected!" but that doesn't tell me enough about what the issue is. Hopefully the developer will fix these issues and it will become part of Debian, as many such projects do. Compiling was dead easy even with gcc-12 once I got freeimage-dev.
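In the meantime, if all you need is the ordering rather than the thumbnails, a few lines of Python get you the list (the extension set here is just an example, and it scans the current directory):

    #!/usr/bin/env python3
    # List image files in the current directory, newest modification date first.
    from pathlib import Path

    EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif", ".webp"}
    images = [p for p in Path(".").iterdir() if p.suffix.lower() in EXTENSIONS]
    for image in sorted(images, key=lambda p: p.stat().st_mtime, reverse=True):
        print(image.name)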
Mum's first death anniversary
I do not know where the year went by or how. The day went in a sort of suspended animation. The only thing I did was eat and sleep that day; I didn't feel like doing anything. Old memories, even dreams of fighting with her, only to realize in the dream itself that it's fake, that she isn't there anymore. Something that can never be fixed.
Debconf Kochi
I should have shared this a few days ago but it somehow slipped my mind. While it's too late for most people to ask for a bursary for Debconf Kochi, if you are anywhere near Kochi between September 3 and September 17, you could walk in to Infopark, Kochi and talk to people. This would be for people who have an interest in free software, FOSS, or Debian specifically. For those who may not know, Debian is a Linux distribution with ports to other kernels as well as to a wide range of hardware. While I may not be able to provide the list of all the various flavors and hardware, I can say it is quite a bit. For example, there is a port to RISC-V that was done a few years back (2018); why that is needed will be shared below. There is always something new to look forward to in a Debconf.
Pressure Cooker and Potatoes
This was asked of me at the last Debconf I attended (2016) by a few people. So as people are coming to India, it probably is a good time to sort of reignite the topic :). A pressure cooker boils your veggies and whatnot while still preserving the nutrients. While there are quite a number of brands, I would suggest either Prestige or Hawkins; I have had good experience with both. There are also some new pressure cookers that are somewhat in the design of the Thai wok, so if that is something you are either comfortable with or looking for, you could look at those. The one thing you have to be most particular about is the pressure safety valve. Just putting "pressure cooker safety valve" into your favorite search engine should show you different makes and whatnot. While they are relatively cheap, you need to check that yours is not cracked or worn. The same goes for the pressure cooker whistle.

The easiest thing to cook in a pressure cooker is mashed potatoes. Pressure cookers come in litres, from 1 Ltr. to 20 Ltr.; the larger ones are obviously for hotels and the like. The general rule for using a pressure cooker is: water up to 1/4th, whatever vegetable or non-veg you want to boil up to 1/2, and leave the remaining part for the steam. So the easiest thing to do is wash the potatoes, fill 1/4th of the pressure cooker with water, and then add the veggies up to half or a little more - in this instance just potatoes. You can add salt too, or that can be done later; the taste will be different. Also, there are various salts, so I won't really go into that, as spices are a rabbit hole. Anyways, after making sure there is enough space for the steam to build, put the lid on the cooker and wait 5-10 minutes for the pressure to build. You will hear a whistling sound; wait for around 5 minutes or a bit more (it depends on many factors: the kind of potatoes, the weather etc.) and then just let it cool off naturally. After 5-10 minutes or a bit more, the pressure will be off and your mashed potatoes are ready, either for consumption or for further processing. I am assuming gas; induction cooking will have its own temperatures, of which I have no idea, hence not sharing that. With a pressure cooker, first put it on the heaviest setting; once it starts to whistle, put it on medium for 5-10 minutes and then let it cool off. The first time I tried this, I burned the cooker. You understand things via trial and error.
Poha recipe
This is a nice low-cost, healthy and fulfilling breakfast called Poha that can be made anytime and requires at most 10-15 minutes to prepare, with minimal fuss. The main ingredient is Poha, or flattened rice. So how is it prepared? I won't go into the details of quantity, as that depends on how hungry people are. There are various kinds of flattened rice available in the market; what you are looking for is called thick Poha or zhad Poha (in Marathi). The first step is the trickiest. What you want to do is put water on the Poha without letting it get soggy. There is an accessory similar to a tea strainer (I forget the name) that drains all the extra moisture; you want the Poha to be a bit fluffy and not soggy. The Poha should breathe for about 5 minutes before being cooked. To cook, use a heavy-bottomed skillet and put some oil in it - there are again lots of variations; you can use groundnut or whatever oil you prefer. Then use a single mustard seed to check the temperature of the oil: once the mustard seed starts to pop, it means the oil is ready. So put the mustard seeds in, then finely chopped onion, finely chopped coriander leaves, a little bit of lemon juice, and if you want potatoes, then potatoes too. Be aware that potatoes soak up oil like anything, so if you are going to have potatoes then the oil should be a bit more. Some people love curry leaves, others don't; I like them quite a bit, as they give a slightly different taste. So the order is
Oil
Mustard seeds (1-2 teaspoons)
Curry leaves (5-10)
Onions (2-3 medium, finely chopped; onion can also be used as a garnish)
Potatoes (2-3 medium ones, mashed)
Small green chillies or 1-2 red chillies (if you want)
Coriander leaves (one bunch, finely chopped)
Peanuts (half a glass)
Make sure that you are stirring them quite a bit. On a good warm skillet, this should hardly take 5 minutes. Once the onions are slightly brown, you are ready to put the Poha in. So put the Poha in and add turmeric, salt, and sugar - again, it depends on the number of people. When I made it for myself and Mum, I usually used 1 teaspoon of salt, not even a fourth of a teaspoon of turmeric (just a hint, it is for the color), and 1 to 2 teaspoons of sugar, and mixed them all well at a medium flame. The Poha used to be two or three glasses.
If you don't want potato in it, you can fry the potatoes a bit separately and garnish with them, along with coriander, coconut and whatnot. In Kerala, there is the possibility that people might have it one day or all days. It serves as a snack at any time: breakfast, lunch, tea time, or even dinner if people don't want to eat heavy. The first few times I made it, I managed to do everything wrong. So, if things go wrong, let it be; after a while, you will find your own way. And again, this is just one way - I'm sure this can be made as elaborate a meal as you want. This is just something you can do if you don't want noodles or are bored with them. The timing is similar.
While I don't claim to be an expert in cooking in any way or form, if people have questions feel free to ask. If you are one or two people, a 2 Ltr. pressure cooker is enough for most Indians; Westerners may want a slightly larger pressure cooker, maybe a 3 Ltr. one. Happy cooking and experimenting!
I have had the pleasure of having Poha in many ways. One of my favorites is when people actually put tadka on top of the Poha. You do everything else but in a slightly reverse order: the tadka has all the spices mixed in, concentrated, and is put on top of the Poha and then mixed in. Done right, it tastes out of this world. For those who might not have had the Indian culinary experience, much of which is actually borrowed from the Mughals, you are in for a treat.
One of the other things I would suggest to people is to ask where they can get five types of rice. This is a specialty of South India and a sort of street food. I know where you can get it in Hyderabad, Bangalore and Chennai, but not in Kerala, although I am dead sure it is there, I have just somehow missed it. If asked, I am sure the Kerala team should be able to guide you.
That's all for now; I am feeling hungry and off to dinner, as I have been sharing about cooking.
RISC-V
There have been a lot of conversations about how India could be a player in the microprocessor space. The x86 and x86-64 space is all tied up between Intel and AMD, so that's a no-go area; let me elaborate a bit on why I say that. While many people know the history, the first transistors came out of Bell Labs, and Intel produced the first commercial microprocessor. The AMD and Intel stories are similar in some aspects but not in others. For a long time Intel was the market leader, and by hook or by crook it remained the market leader. One of the more interesting companies in the 1980s was Cyrix, which sold a lot of low-end microprocessors. A lot of that technology also went into Via, which became a sort of spiritual successor to Cyrix. It is because of Cyrix and Via that Intel was forced to launch the Celeron line of microprocessors.
Lawsuits, European Regulation
Those who were around in the 1990s may have heard the term Wintel, which basically meant Microsoft Windows plus Intel, and the pair had a sort of monopoly power. While the Americans were sort of OK with it, the Europeans were not, and time and time again they forced both Microsoft and Intel to provide alternatives. The pushback from the regulators was so great that Intel funded AMD to remain solvent for a few years. The successes we see today from AMD are Lisa Su's, but there is a whole lot of history as well as bad blood between the two companies - lots of lawsuits and whatnot, and lots of cross-licensing agreements between them as well. So any new country entering this space would need a lot of cash just for licensing all the patents there are, and it's just not feasible for a newcomer to come into this market, as they would have to fork out the cash for the design on top of the manufacturing fab.
ARM
Most mobiles today sport an ARM processor. At one time the name meant Advanced RISC Machines, but the company now goes by Arm Ltd. Arm licenses its designs, and while there are a lot of customers, you are dependent on Arm, and it can change any of its conditions any time it wants. You are also hoping that Arm does not steal your design or do anything else with it. And while people trust Arm, it is still a risk if you are a company.
RISC-V and Shakti
There is not much to say about RISC-V other than this article at the Register. While India does have large ambitions, executing them is far trickier than most people believe, as well as complex and highly capital intensive. The RISC-V route could be a game-changer provided India moves deftly ahead. FWIW, Debian did a RISC-V port in 2018; from what I can tell, you can install it on a VM/QEMU and do stuff. And while RISC-V has its own niches, you never know what happens next. One can speculate a lot, and there is certainly a lot of momentum behind RISC-V. From what little experience I have had, where India has failed time and time again, whether in software or hardware, is support. Support is the key; unless that is fixed, it will remain a dream.
On a slightly sad note, Foxconn is withdrawing from the joint venture it had with Vedanta.
Motherboard Battery
You know you have become too old when you get stumped and the solution is simple and fixed by the vendor. About a week back, I was getting a CPU Fan Error. It's a 6-year-old desktop, so I figured the fan or its ball bearings must have worn out. I opened up the cabinet and could see that both the CPU fan and the side fan were working without an issue, so I couldn't figure out what the problem was. I had updated the BIOS/UEFI a number of years ago, so that couldn't be it. I fiddled with the boot menu and was able to boot into Linux, but it was a pain to have to do that every damn time. As it is, it takes almost 2-3 minutes for the whole desktop to be ready, and this extra step was annoying. I had bought a mid-tower cabinet along with the motherboard, so there were alternate connectors I could try, but still the issue persisted. And the workaround was heart-breaking, as you had to boot into the BIOS/UEFI and fix the boot menu each time, even though it had the Debian boot launcher and a couple of virtual entries provided by the vendor (Asus), and those were hardwired. So, failing all else, I went to my vendor/support and asked if he could find out what the issue was. It cost me $10; he did all the same things I did but one thing more: he changed the motherboard battery (which costs less than 1 USD) and presto, all was right with the world again. I felt like a fool, but a deal is a deal, so I paid the gentleman for his services. Now I can use the desktop again and at least know what's happening in the outside world.
Framework Laptops
I have been seeing quite a few teardowns of Framework laptops on Youtube and love them, more so now that they have AMD in their arsenal. I do hope they work on their pricing and logistics, and that soon we will have them here competing with others. If the pricing isn't prohibitive, I would definitely be one of the first to order. India is and remains a very cost-conscious market, more so with the runaway prices we have been seeing. In fact, the last 3 years have been pretty bad for the overall PC market, which has been declining 30% YoY while prices have gone through the roof. Apart from vendor pricing, taxation has been another hit, as the current Govt. has been levying anywhere from 30-100% in taxes on various PC, desktop and laptop components; I think I have shared before that graphic cards, for instance, attract 100% duty apart from other taxes. I don't see the market picking up in at least the next 24 to 36 months. For most of this year and next year, both AMD and Intel are doing refreshes, so while there will be some improvements (probably 10-15%), nothing earth-shattering for the wider market to sit up and take notice of. Intel proposed a 64-bit-only architecture a couple of months back; more on that later. As far as the Indian market is concerned, if you want the masses, then have lappies at around 40-50k INR ($600 USD) and there would be a mass take-up; if you want to be a Lenovo or something like that, then around a lakh or INR 100k ($1200 USD); or be an Apple, which is around 150k INR or around 2000 USD. There are some clues as to what Framework's plans are, but for that you have to trawl their forums and knowledgebase. It seems some people are using freight forwarders to get around the hurdles, but Framework doesn't want to take any shortcuts there.
Everybody seems to be working on vertical stacking of chips, whether it is the Chinese, the Belgians, or even AMD and Intel, who have their own spins on it, but most of these technologies are at least 3-4 years out in the future (or more). India is a big laggard in this space, having knowledge only of 45nm; in aviation terms, one could say India knows how to build the Boeing 707 (one of the first commercial passenger jets) while today's state of the art is the Boeing 777X or Airbus A350. I have shared in the past how the Tatas have been trying to collaborate with the Japanese to get at least their 25nm chip technology, but nothing has come of it to date. The only somewhat OK news has been the chip testing and packaging plant by Micron to be built in Gujarat. It doesn't do much for us, although we will be footing almost 70% of the plant's capital expenditure, and at the maximum India will get 4k jobs. Most of these plants are highly automated, as dust is their mortal enemy, so even the 4k jobs announced seem far-fetched; it would probably be less than half once production starts, if it happens, but that is probably a story for another time. Just as a parting shot, even memory vendors are moving to highly automated factory lines.
VR Headsets
I was struck by where VR is today while watching Made in Finland. I don't want to delve much into the series, but it is a fascinating one. I was very much taken by the character of Kari Kairamo (or rather the actor who played him) and was very much disappointed by the sad ending the gentleman got; the series implies that the banks implicitly forced him to commit suicide. There is also a lot of chaos, as is normal in a big company with many divisions. It's only when Jorma Ollila takes over that the company sheds a lot of dead weight, with mobiles getting the most funding, which they didn't have before.
I also felt an odd pride when Nokia shows off its 1011 mobile phone, at a time when phones were actually like bricks. My first Nokia came a number of years later, a Nokia 1800, and I have to say those phones lasted far longer than today's Samsungs. If only Nokia had read the tea leaves right. Back to the topic though: I have been wearing glasses since the age of 5. They weigh less than 10 grams and you still get a nose dent, and I know enough people who have got headaches and whatnot from glasses. Unless VR headsets become that size and don't cost an arm and a leg (or a kidney or a liver), they will have niche use. While 5G and 6G will certainly push more people to get one, it will probably take a few more years before we have something that is simple and doesn't need too much to get rolling.
The first season of the series I mentioned above is already over, but I would highly recommend it. I do hope the second season happens quickly and we come to know why and how Nokia missed the Android train, and about their curious turn to Microsoft, which sort of sealed their fate.
Steam
I have been following Steam, Lutris and plenty of other launchers on Debian. There also seems to be some sort of an idea that once Mesa 23.1.x or later comes into Debian, at some point we may get 64-bit Steam, and some people are hopeful we may get it by year-end. There are a plethora of statistics that can be used to find the status of gaming on Linux; this is perhaps the best one I have got so far. Valve also has its own share of stats that it shows here. I am not going to go into much detail except the fact that Lutris has been in Debian for some time now. And as and when Steam does go fully 64-bit, a whole lot of multilib issues could finally be put to rest.

Interestingly, Intel has also quietly shared details of a 64-bit-only architecture for the PC. From what I could tell, it would boot straight into 64-bit mode, skipping the legacy 16-bit and 32-bit boot process. In theory, that should remove a whole lot of code and make things safer as well as faster. If rival AMD were to play along, things could move much faster. Now don't get me wrong, 32-bit was good, but for its time. I'm sure at some point even 64-bit will have its demise and we will jump to 128-bit. Of course, in reality we aren't anywhere close to exhausting even the 48-bit address space current CPUs expose, leave alone 64-bit; Superuser gives a good answer on that. We may be a decade or more from exhausting that, but for sure there will be a need for better, faster hardware, especially as we use more and more AI for good and bad things. I am curious to see how it pans out and how it will affect (or not) FOSS gaming.

FWIW, I used to peruse freegamer.blogspot.com, which kind of ended in 2021, and now use Lee Reilly's blog posts to know what is happening on GitHub as far as FOSS games are concerned. There is also a whole thing about handhelds and gaming, but that would probably require its own blog post or two; there are just too many, while at the same time too few (legally purchasable in India), so maybe sometime in the future. Best way to escape the world. Till later.
Release 0.6.32 of the digest package arrived at CRAN this morning, and will be uploaded to Debian as well.
digest creates hash digests of arbitrary R objects (using the md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, spookyhash, blake3, and crc32c algorithms) permitting easy comparison of R language objects. It is a mature and widely-used package (with 58.3 million downloads just on the partial cloud mirrors of CRAN which keep logs), as many tasks may involve caching of objects for which it provides convenient general-purpose hash key generation to quickly identify the various objects.
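(digest is an R package, but the pattern it serves - serialize an object, hash the bytes, use the hash as a cache key - can be sketched in a few lines of any language. Here is a rough Python analogue of that caching use, purely for illustration and not the package's actual implementation:)

    import hashlib
    import pickle

    def digest(obj) -> str:
        # Serialize the object and hash the bytes, akin to what digest::digest()
        # does for R objects (pickle stands in for R's serialization here).
        return hashlib.sha256(pickle.dumps(obj)).hexdigest()

    cache = {}

    def cached(compute, *args):
        key = digest(args)  # general-purpose hash key for the inputs
        if key not in cache:
            cache[key] = compute(*args)
        return cache[key]

    print(cached(sum, (1, 2, 3)))  # computed once, then served from the cache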
This release brings two changes.
First, we added crc32c as a new hashing algorithm. We did so in a portable, minimal fashion, while also adding a new CRAN crc32c package with the full hardware-optimised support for x86_64 and Arm64 (M1/M2) chips. Fully integrating the optional added package is still work in progress that we may refine. (Now, as it turns out, a first bug report suggests this is not as portable as we hoped. But it also looks like we already have a fix, so a quick follow-up release is likely.) Second, Dean Attali had looked into AES digests and cyphers using the CBC mode and noticed that we needed padding, which he kindly contributed in PR #186.
My CRANberries provides the usual summary of changes to the previous version. For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.
co2mon.nz currently uses monitors based on Oliver Seiler's open source design, which I am personally building. This post describes my exploration of how to achieve production of a CO2 monitor that could enable the growth of co2mon.nz.
Goals
Primarily I want to design a CO2 monitor which allows the majority of the production process to be outsourced. In particular, the PCB should be able to be assembled in an automated fashion (PCBA).
As a secondary goal, I'd like to improve the aesthetics of the monitor while retaining the unique feature of displaying clear visual indication of the current ventilation level through coloured lights.
Overall, I'll consider the project successful if I can achieve a visually attractive CO2 monitor which takes me less than 10 minutes per monitor to assemble/box/ship and whose production cost has the potential to be lower than the current model.
PCB
Schematic
The existing CO2 monitor design provides a solid foundation but relies upon the ESP32 Devkit board, which is intended for evaluation purposes and is not well suited to automated assembly. Replacing this devkit board with the underlying ESP32 module is the major change needed to enable PCBA production, which then also requires moving the supporting electronics from the devkit board directly onto the primary PCB.
The basic ESP32 chipset used in the devkit boards is no longer available as a discrete module suitable for placement directly onto a PCB, which means the board will also have to be updated to use a more modern variant of the ESP32 chipset that is in active production, such as the ESP32-S3. The ESP32-S3-WROOM1-N4 module is a very close match to the original devkit and will be suitable for this project.
In addition to the change of ESP module, I made the following other changes to the components in use:
Added an additional temperature/humidity sensor (SHT30). The current monitors take temperature/humidity measurements from the SCD40 chipset. These are primarily intended to help in the calculation of CO2 levels and rely on an offset being subtracted to account for the heat generated by the electronic components themselves. I've found their accuracy to be OK, but not perfect. The SHT30 is a cheap part, so adding it to hopefully provide improved temperature/humidity measurement is an easy choice.
Swapped to USB-C instead of USB-B for the power connector. USB-C is much more common than USB-B and is also smaller and not as tall off the board, which provides more flexibility in the case design.
With major components selected the key task is to draw the schematic diagram describing how they electrically connect to each other, which includes all the supporting electronics (e.g. resistors, capacitors, etc) needed.
I started out trying to use the EasyEDA/OSHWLab ecosystem, thinking the tight integration with JLCPCB's assembly services would be a benefit, but the web interface was too clunky and limiting and I quickly got frustrated. KiCad proved to be a much more pleasant and capable tool for the job.
The reference design in the ESP32 datasheet (p28) and the USB-C power supply examples from blnlabs were particularly helpful, alongside the KiCad documentation and the example of the existing monitor, in completing this step.
Layout
The next step is to physically lay out where each component from the schematic will sit on the PCB itself. Obviously this requires first determining the overall size, shape and outline of the board and needs to occur in iteration with the intended design of the overall monitor, including the case, to ensure components like switches and USB sockets line up correctly.
In addition to the requirements around the look and function of the case, the components themselves also have considerations that must be taken into account, including:
For best WiFi reception, the ESP32 antenna should be at the top of the monitor and should not have PCB underneath it, or for a specified distance either side of it.
The SHT30 temperature sensor should be as far from any heat generating components (e.g. the ESP32, BME680 and SCD40 modules) as possible and also considering that any generated heat will rise, as low on the monitor as possible.
The sensors measuring the air (SCD40, BME680 and SHT30) must have good exposure to the air outside the case.
Taking all of these factors into account I ended up with a square PCB containing a cutout in the top right so that the ESP32 antenna can sit within the overall square outline while still meeting its design requirements. The SCD40 and BME680 sit in the top left corner, near the edges for good airflow and far away from the SHT30 temperature sensor in the bottom left corner. The LEDs I placed in a horizontal row across the center of the board, the LCD in the bottom right, a push button on the right-hand side and the USB-C socket in the center at the bottom.
Once the components are placed, the next big task is to route the traces (aka wires) between the components on the board such that all the required electrical connections are made without any unintended connections (aka shorts) being created. This is a fun constraint solving/optimisation challenge and takes on an almost artistic aspect, with other PCB designers often having strong opinions on which layout is best. The majority of the traces and routing for this board were able to be placed on the top layer of the PCB, but I also made use of the back layer for a few traces to help avoid conflicts and deal with places where different traces needed to cross each other. It's easy to see how this step would be much more challenging and time consuming on a larger and more complex PCB design.
The final touches were to add some debugging breakouts for the serial and JTAG ports on the ESP32-S3 and a logo and various other helpful text on the silkscreen layer that will be printed on the PCB so it looks nice.
Production
For assembly of the PCB, I went with JLCPCB based out of China. The trickiest part of the process was component selection and ensuring that the parts I had planned in the schematic were available. JLCPCB in conjunction with lcsc.com provides a basic and extended part library. If you use only basic parts you get quicker and cheaper assembly, while using extended parts bumps your order into a longer process with a small fee charged for each component on the board.
Initially I spent a lot of time selecting components (particularly LEDs and switches) that were in the basic library before realising that the ESP32 modules are only available in the extended library! I think the lesson is that unless you're building the most trivial PCB with only passive components, you will almost certainly end up in the advanced assembly process anyway, so trying to stay within the basic parts library is not worth the time.
Unfortunately the SCD40 sensor, the most crucial part of the monitor, is not stocked at all by JLCPCB/LCSC! To work around this, JLCPCB will maintain a personal component library for you when you ship components to them for use in future orders. Given the extra logistical time and hassle of having to do this, combined with having a number of SCD40 components already on hand, I decided to have the boards assembled without this component populated for the initial prototype run. This also had the benefit of lowering the risk if something went wrong, as the cost of the SCD40 is greater than the cost of the PCB and all the other components combined!
I found the kicad-jlcpcb-tools plugin for KiCad invaluable for keeping track of which part from lcsc.com I was planning to use for each component and for generating the necessary output files for JLCPCB. The plugin allows you to store these mappings in your actual schematic, which is very handy. The search interface it provides is fairly clunky and I found it was often easier to search for the part I needed on lcsc.com and then just copy the part number across into the plugin's search box rather than trying to search by name or component type.
The LCD screen is the remaining component which is not easily assembled onto the PCB directly, but as you'll see next, this actually turned out to be OK, as integrating the screen directly into the case makes the final assembly process smoother.
The final surprise in the assembly process was the concept of edge rails: additional PCB material that is needed on either side of the board to help with feeding it through the assembly machine in the correct position. These can be added automatically by JLCPCB and have to be snapped off after the completed boards are received. I hadn't heard about these before and I was a little worried that they'd interfere or get in the way of either the antenna cut-out at the top of the board, or the switch on the right hand side, as it overhangs the edge so it can sit flush with the case.
In the end there was no issue with the edge rails. The switch was placed hanging over them without issue and snapping them off once the boards arrived was a trivial 30s job using a vice to hold the edge rail and then gently tipping the board over until it snapped off - the interface between the board and the rails while solid looking has obviously been scored or perforated in some way during the production process so the edge breaks cleanly and smoothly. Magic!
The process was amazingly quick, with the completed PCBs arriving within 7 days of the order being placed and looking amazing.
Case
Design
I mocked up a very simple prototype of the case in FreeCAD during the PCB design process to help position and align the placement of the screen, switch and USB socket on the PCB as all three of these components interface directly with the edges of the case. Initially this design was similar to the current monitor design where the PCB (with lights and screen attached) sits in the bottom of the case, which has walls containing grilles for airflow and then a separate transparent perspex is screwed onto the top to complete the enclosure.
As part of the aesthetic improvements for the new monitor I wanted to move away from a transparent front panel to something opaque but still translucent enough to allow the colour of the lights to show through. Without a transparent front panel the LCD also needs to be mounted directly into the case itself.
The first few prototype iterations followed the design of the original CO2 monitor with a flat front panel that attaches to the rest of the case containing the PCB, but the new requirement to also attach the LCD to the front panel proved to make this unworkable. To stay in place the LCD has to be pushed onto mounting poles containing a catch mechanism which requires a moderate amount of force and applying that force to the LCD board when it is already connected to the PCB is essentially impossible.
As a result I ended up completely flipping the design such that the front panel is a single piece of plastic that also encompasses the walls of the case and contains appropriate mounting stakes for both the screen and the main PCB.
Getting to this design hugely simplified the assembly process. Starting with an empty case lying face down on a bench, the LCD screen is pushed onto the mounting poles and sits flush with the cover of the case - easily achieved without the main PCB yet in place.
Next, the main PCB is gently lowered into the case facing downwards and sits on the mounting pole in each corner with the pins for the LCD just protruding through the appropriate holes in the PCB ready to be quickly soldered into place (this took significant iteration and tuning of dimensions/positioning to achieve!).
Finally, a back panel can be attached which holds the PCB in place and uses cantilever snap joints to click on to the rest of the case.
Overall the design is a huge improvement over the previous case which required screws and spacers to position the PCB and cover relative to the rest of the case, with the spacers and screws being particularly fiddly to work with.
The major concern I had with the new design was that the mount attaching the monitor to the wall has moved from the main case and components to the removable back panel - if the clips holding this panel to the case fail, the core part of the monitor will fall off the wall, which would not be good. To guard against this I've doubled the size and number of clips at the top of the case (which bears the weight) and the result seems very robust in my testing. Completely assembling a monitor, including the soldering step, takes me about 2-3 minutes individually, and would be even quicker if working in batches.
Production
Given the number of design/testing iterations required to fine tune the case I chose not to outsource case production for now and used my 3D printer to produce them. I've successfully used JLCPCB's 3D printing service for the previous case design, so I'm confident that getting sufficient cases printed from JLCPCB or another supplier will not be an issue now that the design is finalised.
I tried a variety of filament colours, but settled on a transparent filament which, once combined in the necessary layers to form the case, is not actually transparent like perspex but provides a nice translucent medium which achieves the goal of having the light colour visible without exposing all of the circuit board detail. There's room for future improvement in the positioning of the LEDs on the circuit board to provide a more even distribution of light across the case, but overall I really like the way the completed monitor ends up looking.
Evaluation
Building this monitor has been a really fun project, both in seeing something progress from an idea, to plans on a screen to a nice physical thing on my wall, but also in learning and developing a bunch of new skills in PCB design, assembly and 3D design.
The goal of having a CO2 monitor whose production I can outsource the vast majority of is as close to being met as I think is possible without undertaking the final proof of placing a large order. I've satisfied myself that each step is feasible and that the final assembly process is quick, easy and well below the level of effort and time it was taking me to produce the original monitors.
Cost wise it's also a huge win, primarily in terms of the time taken, but also in the raw components - currently the five prototypes I ordered and built are on par with the component cost of the original CO2 monitor, but this will drop further with larger orders due to price breaks and amortisation of the setup and shipping expenses across more monitors.
This project has also given me a much better appreciation for how much I'm only just scratching the surface of the potential complexities and challenges in producing a hardware product of this type.
I'm reasonably confident I could successfully produce a few hundred and maybe even a few thousand monitors using this approach, but it's also clear that getting beyond that point would be a whole further level of effort and learning.
Hardware is hard work. That's not news to anyone, including me, but there is something to be said for experiencing the process first hand to make the reality of what's required real.
The PCB and case designs are both shared and can be found at https://github.com/co2monnz/co2monitor-pcb and https://github.com/co2monnz/cad, feedback and suggestions welcome!
EDIT: One of my 2 keys has died. There are what seem like golden bubbles
under the epoxy, over one of the chips, and those were not there before. I've
emailed SoloKeys and I'm waiting for a reply, but for now, I've stopped using
the Solo V2 altogether :(
I recently received the two Solo V2 hardware tokens I ordered as part of their
crowdfunding campaign, back in March 2022. It did take them longer than
advertised to ship me the tokens, but that's hardly unexpected from such a
small-scale, crowdfunded undertaking.
I'm mostly happy about my purchase and I'm glad to get rid of the aging Tomu
boards I was using as U2F tokens[1]. Still, beware: I am not sure
it's a product I would recommend if what you want is simply something that
works. If you do not care about open-source hardware, the Solo V2 is not for
you.
The Good
I first want to mention I find the Solo V2 gorgeous. I really like the black and
gold color scheme of the USB-A model (which is reversible!) and it seems like a
well built and solid device. I'm not afraid to have it on my keyring and I fully
expect it to last a long time.
I'm also very impressed by the modular design: the PCB sits inside a shell,
which decouples the logic from the USB interface and lets them manufacture a
single board for both the USB-C and USB-A models. The clear epoxy layer on top
of the PCB module also looks very nice in my opinion.
I'm also very happy the Solo V2 has capacitive touch buttons instead of
physical "clicky" buttons, as it means the device has no moving parts. The
token has three buttons (the gold metal strips): one on each side of the device
and a third one near the keyhole.
As far as I've seen, the FIDO2 functions seem to work well via the USB
interface and do not require any configuration on a Debian 12 machine. I've
already migrated to the Solo V2 for web-based 2FA and I am in the process of
migrating to an SSH ed25519-sk key. Here is a guide I recommend if
you plan on setting those up with a Solo V2.
The Bad and the Ugly
Sadly, the Solo V2 is far from being a perfect project. First of all, since the
crowdfunding campaign is still being fulfilled, it is not currently
commercially available. Chances are you won't be able to buy one directly
before at least Q4 2023.
I've also hit what seems to be a pretty big firmware bug, or at least, one that
affects my use case quite a bit. Invoking gpg crashes the Solo V2 completely
if you also have scdaemon installed. Since scdaemon is necessary to use
gpg with an OpenPGP smartcard, this means you cannot issue any gpg commands
(like signing a git commit...) while the Solo V2 is plugged in.
Any gpg command that queries scdaemon, such as gpg --edit-card or
gpg --sign foo.txt, times out after about 20 seconds and leaves the token
unresponsive to both touch and CLI commands.
The way to "fix" this issue is to make sure scdaemon does not interact with
the Solo V2 anymore, using the reader-port argument:
1. Plug in both your Solo V2 and your OpenPGP smartcard.

2. To get a list of the tokens scdaemon sees, run the following command:

       $ echo "scd getinfo reader_list" | gpg-connect-agent --decode | awk '/^D/ {print $2}'

3. Identify your OpenPGP smartcard. For example, my Nitrokey Start is listed as
   20A0:4211:FSIJ-1.2.15-43211613:0

4. Create a file in ~/.gnupg/scdaemon.conf with the following line:
   reader-port $YOUR_TOKEN_ID. For example, in my case I have:

       reader-port 20A0:4211:FSIJ-1.2.15-43211613:0

5. Reload scdaemon:

       $ gpgconf --reload scdaemon
Although this is clearly a firmware bug[2], I do believe GnuPG is also
partly to blame here. Let's just say I was not very surprised to have to battle
scdaemon again, as I've had previous issues with it.
Which leads me to my biggest gripe so far: it seems SoloKeys (the company)
isn't really fixing firmware issues anymore and doesn't seem to care. The last
firmware release is about a year old.
Although people are experiencing serious bugs, there is no official way to
report them, which leads to issues being seemingly ignored. For
example, the NFC feature is apparently killing keys (!!!), but no one
from the company seems to have acknowledged the issue. The same goes for my
GnuPG bug, which was flagged in September 2022.
For a project that mainly differentiates itself from its (superior) competition
by being "Open", it's not a very good look... Although SoloKeys is still an
unprofitable open source side business of its creators[3], this kind of
attitude certainly doesn't help foster trust.
Conclusion
If you want to have a nice, durable FIDO2 token, I would suggest you get one of
the many models Yubico offers. They are similarly priced, are readily
commercially available, are part of a nice and maintained software ecosystem
and have more features than the Solo V2 (OpenPGP support being the one I miss
the most). Yubikeys are the practical option.
What they are not is open-source hardware, whereas the Solo V2 is. As
bunnie very well explained on his blog in 2019, it does not mean
the latter is inherently more trustable than the former, but it does make the
Solo V2 the ideological option. Knowledge is power and it should be free.
As such, tread carefully with SoloKeys, but don't dismiss them altogether: the
Solo V2 is certainly functioning well enough for me.
[1] Although U2F is still part of the FIDO2 specification, the Tomus
predate this standard and were thus not fully compliant with FIDO2. So long
and thanks for all the fish little boards, you've served me well!
When I connect my Desklab USB-C monitor [1] (which has been vastly underused for the last 3 years) to a Linux system, the display type is listed as "DO NOT USE RTK".
One of the more informative discussions of this was on the Linux Mint forums [2], which revealed that it's the mapping for a code that shouldn't be used. So it's not saying "don't use this monitor", it's saying "don't use this code". The Desklab people, when they implemented a display with an RTK chipset, should have changed the ID field from RTK to something representing their use. On Debian the file /usr/share/hwdata/pnp.ids has the IDs and you can grep for RTK in it.
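A quick way to see the mapping for yourself is the Python equivalent of that grep (same path as above):

    # Print every entry in the PNP ID database whose vendor code is RTK.
    with open("/usr/share/hwdata/pnp.ids") as ids:
        for line in ids:
            if line.startswith("RTK"):
                print(line.rstrip())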
Also, for programmers: please use more descriptive strings than "do not use". When I was trying to find this on Debian code search [3] it turned up hundreds of pages of results, which is more than a human can read through. If the text had been something that would make sense to a user, such as "OEM please replace with company name", it would have made it very clear to me (and all the other people searching for this) what it meant, and the fact that Desklab had stuffed up. Instead of wondering about this for years before eventually finding the right Google search to get the answer, I could have worked it out immediately if the text had been clearer.
India Press Freedom
Just about a week back, India again slipped in the Press Freedom Index, this time falling to 161 out of 180 countries. The RW again made a lot of noise, as they cannot fathom why this keeps happening. A recent news story gives some idea. Every year the NCRB (National Crime Records Bureau) puts out its statistics of crimes happening across the country; the report is in the public domain. Now, according to the report shared, around 40k women from Gujarat alone disappeared in the last five years. This is a state where the BJP has been ruling for the last 30-odd years. When this report went viral, the news was censored/blacked out in almost all national newspapers. For example, check out newindianexpress.com; likewise on TOI and other newspapers the news has been 404'd. The only place you can get the news is in minority papers like Siasat. But the story didn't end there. While the NCW (National Commission for Women) pointed out similar things happening in J&K, Gujarat Police claimed they got almost 39k women back. Now ideally, that should have been added to the NCRB data as an addendum, as the report can be challenged. But as the news went viral, nobody knows what is true or false in the above. What the BJP has been doing is, whenever they get questioned, they try to muddy the waters like that. And most of the time such news doesn't make it to court, so the party gets a freebie of sorts, as they are not legally challenged. Even if somebody asks why Gujarat Police didn't do that, since the NCRB report is jointly made with the help of all the states, and especially with the BJP both at the Center and in the State, they cannot give any excuse. The only excuse you see or hear is whataboutism, unfortunately.
Profiteering on I.T. Hardware
I was chatting yesterday with a friend who is an enthusiast like me but has been more alert about what has been happening in the CPU, motherboard and RAM world. I was simply shocked to hear the prices of motherboards which are three years old, even middling ones. For example, the last time I bought a mobo I spent about 6k, but that was for an ATX motherboard; most ITX motherboards usually sold for around INR 4k or even lower. I remember Via especially, as their mobos were even cheaper, around INR 1.5-2k. Even before the pandemic, many motherboard manufacturers had closed up shop, leaving only a few in the market, and as only a few remained, prices started going higher. The pandemic turned it into a seller's market overnight, as most people were stuck at home and needed good rigs for work or leisure or both. The manufacturers of CPUs, motherboards, GPUs and power supplies (SMPS) named their prices and people bought them. So in 2023 high prices have remained, while warranty periods have come down. Governments have also upped customs and various other duties, so all are hand in glove in this situation. As shared before, what I have been offered is a 4-year-old motherboard with a CPU of that time. I haven't bought it, nor do I intend to in the short-term future, but I am extremely disappointed with the state of affairs.
AMD Issues
It's just been a couple of hard weeks for AMD, apparently. The first issue has been the TPM (Trusted Platform Module) vulnerability that was shown by a couple of security researchers. From what is known, with $200 worth of tools and some time, you can apparently hack into somebody's machine if you have physical access. Ironically, MS made a huge show about TPM and also made it a sort of requirement if a person wanted to have Windows 11. I remember Matthew Garrett sharing about TPM and issues with Lenovo laptops. While AMD has acknowledged the issue, its response has been somewhat wishy-washy. But this is not the only issue plaguing AMD: there have been reports of AMD chips literally exploding, and again AMD issued a somewhat wishy-washy response. Asus did make some changes, but whether they cover all Zen 4 parts or only some is not known. Most people are expecting a recession in I.T. hardware this year as well as next year due to the high prices. No idea if things will change, if ever.
CAT-6 patch cord & ONU
A few months back I was offered a fibre service. Most of the service offerings here use Chinese infrastructure, including the ONU (Optical Network Unit). Wikipedia doesn't have a good page on ONUs, hence I had to rely on third-party sites; FS (a name I don't really know) has some good basic info on ONUs and how they are part and parcel of the whole infrastructure. I also got an ONT (Optical Network Terminal), but it seems to be very basic and mostly dumb. I used an old CAT-6 cable (a decade old) to connect them and it worked for a couple of months. When I had to change it, I first checked whether a higher-grade cable solution offered itself. CAT-7 is there but not backward compatible; CAT-8 is the next higher version, but apparently it's expensive and not easily bought for home applications at the moment. A good summary of CAT-8 and what it stands for can be found here. I did quite a few tests on the CAT-6 cable and the ONU, and it conks out at at best 1 Gbps, which is still far better than what I am used to. The networking part is hopeless anyway, as most consumer-facing CPUs and motherboards don't even offer 10 Gbps, so asking anything more of the cable is just overkill without any benefit. Which does bring me to the next question, something that I may look at in a few months or a year down the road. Just to clarify: they may say the service is 100 Mbps or even 1 Gbps, but that's plain wrong.
AMD APU, Asus Motherboard & Dealerships
I had been thinking of an AMD APU; I could wait a while, but sooner or later I would have to get one. I got quoted an AMD Ryzen 3 3200G with an Asus A320 motherboard for around 14k, which kinda looked steep to me. Quite a few hardware dealers whom I had traded with and consulted over the years have simply shut down. While there are new people, it's much harder now to make relationships (due to deafness) than before. The easiest tool to share, which was also online, was pcpartpicker.com, which had an Indian domain that is now no longer available. A number of offline brick-and-mortar PC businesses have also closed. There are a few new ones, but it takes time, and the big guys have made more of a killing. I was shocked quite a bit. I came home, browsed a bit, and was hit by this. Both AMD's and Intel's PC businesses have taken a beating. AMD a bit more, as Intel still holds part of the business segment that has traditionally been theirs. There have been proofs of and allegations about bribing in the past (do remember the EU antitrust case against Intel for monopoly), but Intel's own cutting of corners with the Spectre and Meltdown flaws hasn't helped its case, nor have the lawsuits themselves. AMD, on the other hand, under the expertise of Lisa Su, has simply grown from strength to strength. Inflation and profiteering by other big companies have made the outlook for both AMD and Intel a bit lackluster. AMD is supposed to show Zen5 chips in a few days' time, and the rumor mill has been ongoing.
Correction: not in a few days, but in 2025.
Personally, I would be happy with maybe a Ryzen 5600G with an Asus motherboard. My main requirement whenever I buy an APU is not to go beyond a 65W TDP. It's kinda middle of the road. As far as I could read, this year and next year we could have AM4+ or similar refreshes; AM5 APUs, CPUs, and boards are slated to be launched in 2025. I did see pcpricetracker and it does give an idea of various APU prices, although I have to say pcpartpicker was much more intuitive to work with.
I just had my system cleaned a couple of months ago, so touch wood I should be able to use it for another couple of years or more before I have to get one of these APUs, and I do hope they are worth it. My idea is to use it not only for testing various softwares but also to delve a bit into VR, if that's possible. I did read a bit about deafness and VR as well. A good summary can be found here. I am hopeful that there may be a few people in the community who may look at and respond to that. It's crucial.
TRAI-caller, Privacy 101 & Element
While most of us in the Debian and FOSS communities do engage with privacy, lots of times it's frustrating. I'm always looking for videos that seek to share why privacy is needed by individuals and why governments and other parties hate it. There are a couple of basic YouTube videos that explain the same quite practically.
Now why am I sharing the above? It isn't that people do not value privacy and hold it dear. I share it because the GOI just today blocked Element. While it may be trivial for us to work around the issue, it does tell you what the GOI is doing. And it still acts surprised as to why its press-freedom ranking is going to the pits.
Even our women wrestlers have been protesting for a week just to get an FIR (First Information Report) filed. And these are women who have won medals for the country. More than half of these organizations, specifically the women's wrestling federation, don't have a POSH committee, which is a mandatory body supposed to exist in every organization. POSH stands for Prevention of Sexual Harassment at the Workplace. The gentleman concerned is a known rowdy/goon, hence it took almost a week of protest to get the needful done.
I do try not to report on this, because right now every other day we see the Govt. curtailing our rights somewhere or the other, and most people are mute.
Signing out, till later
Way back at DebConf16, Gunnar managed to arrange for a number of Next Thing Co. C.H.I.P. boards to be distributed to those who were interested. I was lucky enough to be amongst those who received one, but I have to confess that after some initial experimentation it ended up sitting in its box unused.
The reasons for that were varied; partly about not being quite sure what best to do with it, partly due to a number of limitations it had, and partly because NTC sadly went insolvent and there was less momentum around the hardware. I've always meant to go back to it, poking it every now and then but never completing a project. I'm finally almost there, and I figure I should write some of it up.
TL;DR: My C.H.I.P. is currently running a mainline Linux 6.3 kernel with only a few DTS patches, an upstream u-boot v2022.01 with a couple of minor patches, and an unmodified Debian bullseye armhf userspace.
Storage
The main issue with the C.H.I.P. is that it uses MLC NAND; in particular, mine has an 8GB H27QCG8T2E5R. That ended up unsupported in Linux, with the UBIFS folk disallowing operation on MLC devices. There's been subsequent work to enable an SLC emulation mode, which makes the device more reliable at the cost of losing capacity by pairing up writes/reads in cells (AFAICT). Some of this hit for the H27UCG8T2ETR in 5.16 kernels, but I definitely did some experimentation with 5.17 without having much success. I should maybe go back and try again, but I ended up going a different route.
It turned out that BytePorter had documented how to add a microSD slot to the NTC C.H.I.P., using just a microSD to full SD card adapter. Every microSD card I buy seems to come with one of these, so I had plenty lying around to test with. I started with ensuring the kernel could see it ok (by modifying the device tree), but once that was all confirmed I went further and built a more modern u-boot that talked to the SD card, and defaulted to booting off it. That meant no more relying on the internal NAND at all!
I do see some flakiness with the SD card, which is possibly down to the dodgy way it's hooked up (I should probably do a basic PCB layout with JLCPCB instead). That's mostly been mitigated by forcing it into 1-bit mode instead of 4-bit mode (I tried lowering the frequency too, but that didn't make a difference).
The problem manifests as:
sunxi-mmc 1c11000.mmc: data error, sending stop command
and then all storage access freezing (existing logins still work, if the program you're trying to run is in cache). I can't find a conclusive software solution to this; I'm pretty sure it's the hardware, but I don't understand why the recovery doesn't generally work.
Random power offs
After I had storage working I'd see random hangs or power offs. It wasn't quite clear what was going on. So I started trying to work out how to find the CPU temperature, in case it was overheating. It turns out the temperature sensor on the R8 is part of the touchscreen driver, and I'd taken my usual approach of turning off all the drivers I didn't think I'd need. Enabling it (CONFIG_TOUCHSCREEN_SUN4I) gave temperature readings and seemed to help somewhat with stability, though not completely.
Next I ended up looking at the AXP209 PMIC. There were various scripts still installed (I'd started out with the NTC Debian install and slowly upgraded it to bullseye while stripping away the obvious pieces I didn't need), including a start-up script called enable-no-limit. This turned out not to be running (some sort of expectation of i2c-dev being loaded, and another failing check), but looking at the script and the data sheet revealed the issue.
The AXP209 can cope with 3 power sources: an external DC source, a Li-battery, and finally a USB port. I was powering my board via the USB port, using a charger rated for 2A. It turns out that the AXP209 defaults to limiting USB current to 900mA, and that with wifi active and the CPU busy the C.H.I.P. can rise above that, at which point the AXP shuts everything down. Armed with that info I was able to understand what the power scripts were doing and which bit I needed - i2cset -f -y 0 0x34 0x30 0x03 to set no limit and disable the auto power-off. Additionally I discovered that the AXP209 has a built-in temperature sensor as well, so I added support for that via iio-hwmon.
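For reference, the same poking done with i2c-tools, with the register semantics taken from the AXP209 data sheet (REG 0x30 is the VBUS path management register, and its bottom two bits select the current limit); double-check against your own board before writing to it:

# Read the VBUS path management register (the AXP209 sits at address 0x34 on i2c bus 0)
i2cget -f -y 0 0x34 0x30
# Set the limit bits to 11 (no limit), as the enable-no-limit script intended
i2cset -f -y 0 0x34 0x30 0x03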
WiFi
WiFi on the C.H.I.P. is provided by an RTL8723BS SDIO-attached device. It's terrible (and not just here; I had an x86-based device with one where it also sucked). Thankfully there's a driver in staging in the kernel these days, but I've still found it can fall out with my house setup, ending up connecting to a further-away AP, which then results in lots of retries, dropped frames, and CPU consumption. Nailing it to the AP on the other side of the wall from where it sits helps. I haven't done any serious testing with the Bluetooth other than checking it's detected and can scan ok.
Patches
I patched u-boot v2022.01 (which shows you how long ago I was trying this out) with the following to enable boot from external SD:
u-boot C.H.I.P. external SD patch
I've sent some patches for the kernel device tree upstream - there's an outstanding issue with the Bluetooth wake GPIO causing the serial port not to probe(!) that I need to resolve before sending a v2, but what's there works for me.
The only remaining piece is a patch to enable the external SD for Linux; I don't think it's appropriate to send upstream, but it's fairly basic. It limits the bus to 1 bit rather than the 4 bits it's capable of, as mentioned above.
Linux C.H.I.P. external SD DTS patch
diff --git a/arch/arm/boot/dts/sun5i-r8-chip.dts b/arch/arm/boot/dts/sun5i-r8-chip.dts
index fd37bd1f3920..2b5aa4952620 100644
--- a/arch/arm/boot/dts/sun5i-r8-chip.dts
+++ b/arch/arm/boot/dts/sun5i-r8-chip.dts
@@ -163,6 +163,17 @@ &mmc0 {
 	status = "okay";
 };
 
+&mmc2 {
+	pinctrl-names = "default";
+	pinctrl-0 = <&mmc2_4bit_pe_pins>;
+	vmmc-supply = <&reg_vcc3v3>;
+	vqmmc-supply = <&reg_vcc3v3>;
+	bus-width = <1>;
+	non-removable;
+	disable-wp;
+	status = "okay";
+};
+
 &ohci0 {
 	status = "okay";
 };
As for what I'm doing with it, I think that'll have to be a separate post.
I'd like to describe and discuss a threat model for computational devices. This is generic, but we will narrow it down to security-related devices: for example, portable hardware dongles used for OpenPGP/OpenSSH keys, FIDO/U2F, OATH HOTP/TOTP, PIV, payment cards, wallets, etc., and more permanently attached devices like a Hardware Security Module (HSM), a TPM chip, or the hybrid variant of a mostly permanently-inserted but removable hardware security dongle.
Our context is cryptographic hardware engineering, and the purpose of the threat model is to serve as a thought experiment for how to build and design security devices that offer better protection. The threat model is related to the Evil Maid attack.
Our focus is to improve security for the end-user, rather than the traditional focus of improving security for the organization that provides the token to the end-user, or for the site that the end-user is authenticating to. This is a critical but often under-appreciated distinction, and it leads to surprising recommendations related to onboard key generation, randomness, etc. below.
The Substitution Attack
Your takeaway should be that devices should be designed to mitigate harmful consequences if any component of the device (hardware or software) is substituted for a malicious component for some period of time, at any time, during the lifespan of that component. Some designs protect better against this attack than other designs, and the threat model can be used to understand which designs are really bad, and which are less so.
Terminology
The threat model involves at least one device that is well-behaving and one that is not, and we call these the Good Device and the Bad Device respectively. The bad device may be the same physical device as the good device, but with some minor software modification or a minor component replaced, or it could be a completely separate physical device. We don't care about that distinction; we just care whether a particular device has a malicious component in it or not. I'll use terms like "security device", "device", "hardware key", and "security co-processor" interchangeably.
From an engineering point of view, "malicious" here includes unintentional behavior such as software or hardware bugs. It is not possible to differentiate an intentionally malicious device from a well-designed device with a critical bug.
Don't attribute to malice what can be adequately explained by stupidity, but don't naïvely attribute to stupidity what may be deniably malicious.
What is "some period of time"?
"Some period of time" can be any length of time: seconds, minutes, days, weeks, etc.
It may also occur at any time: during manufacturing, during transportation to the user, after first usage by the user, or after a couple of months of usage by the user. Note that we intentionally consider time-of-manufacturing as a vulnerable phase.
Even further, the substitution may occur multiple times: the Good Key may be replaced with a Bad Key by the attacker for one day and then returned, and this may repeat a month later.
What are "harmful consequences"?
Since a security key has a fairly well-confined scope and purpose, we can get a fairly good exhaustive list of things that could go wrong. Harmful consequences include:
Attacker learns any secret keys stored on a Good Key.
Attacker causes user to trust a public key generated by a Bad Key.
Attacker is able to sign something using a Good Key.
Attacker learns the PIN code used to unlock a Good Key.
Attacker learns data that is decrypted by a Good Key.
Thin vs Deep solutions
One approach to mitigating many issues arising from device substitution is to have the host (or remote site) require that the device prove that it is the intended unique device before it continues to talk to it. This requires an authentication/authorization protocol, which usually involves a unique device identity and out-of-band trust anchors. Such trust anchors are often problematic, since a common use-case for a security device is to connect it to a host that has never seen the device before.
A weaker approach is to have the device prove that it merely belongs to a class of genuine devices from a trusted manufacturer, usually by providing a signature generated by a device-specific private key signed by the device manufacturer. This is weaker since the user then cannot differentiate between two different good devices.
In both cases, the host (or remote site) would stop talking to the device if it cannot prove that it is the intended key, or at least belongs to a class of known trusted genuine devices.
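To make the class-level variant concrete, here is a toy Python sketch, with Ed25519 via the cryptography package standing in for whatever attestation scheme a real vendor uses; nothing here is any particular product's protocol:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Factory provisioning: the manufacturer signs each device's public key.
manufacturer = Ed25519PrivateKey.generate()
device = Ed25519PrivateKey.generate()
device_pub = device.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)
attestation = manufacturer.sign(device_pub)

# Host-side check: accept any device carrying a valid manufacturer signature.
# Note this only proves class membership - two genuine devices are
# indistinguishable here, which is exactly the weakness described above.
try:
    manufacturer.public_key().verify(attestation, device_pub)
    print("device belongs to the genuine class")
except InvalidSignature:
    print("reject device")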
Upon scrutiny, this solution is still vulnerable to a substitution attack, just earlier in the manufacturing chain: how can the process that injects the per-device or per-class identities/secrets know that it is putting them into a good device rather than a malicious one? Consider also the consequences if the cryptographic keys that guarantee that a device is genuine leak.
The model of the thin solution is similar to the old approach to network firewalls: have a filtering firewall that only lets through intended traffic, and then run completely insecure protocols internally such as telnet.
The networking world has evolved, and now we have defense in depth: even within strongly firewalled networks, it is prudent to run, for example, SSH with public-key-based user authentication even on locally trusted physical networks. This approach requires more thought and adds complexity, since each level has to provide some security checking.
I'm arguing we need similar defense-in-depth for security devices. Security key designs cannot simply dodge this problem by assuming they are working in a friendly environment where component substitution never occurs.
Example: Device authentication using PIN codes
To see how this threat model can be applied to reason about security key designs, let's consider a common design.
Many security keys use PIN codes to unlock private-key operations, for example OpenPGP cards that lack built-in PIN-entry functionality. The software on the computer just sends a PIN code to the device, and the device allows private-key operations if the PIN code is correct.
Let's apply the substitution threat model to this design: the attacker replaces the intended good device with a malicious device that saves a copy of the PIN code presented to it, and then gives out error messages. Once the user has entered the PIN code and gotten an error message, presumably temporarily giving up and doing other things, the attacker replaces the device again. The attacker has learnt the PIN code, and can later use it to perform private-key operations on the good device.
This means a good design involves not sending PIN codes in the clear, but using a stronger authentication protocol that allows the card to know that the PIN was correct without learning the PIN. This is implemented optionally for many OpenPGP cards today as the key-derivation-function extension. That should be mandatory: users should not use setups that send device authentication in the clear, and ultimately security devices should not even include support for that. Compare how I build Gnuk on my PGP card with the kdf_do=required option.
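To make the shape of such a scheme concrete, here is a minimal Python sketch, using PBKDF2 as a stand-in for the S2K-based KDF that the OpenPGP kdf-do extension actually specifies; the salt and iteration count are illustrative only:

import hashlib

def pin_verifier(pin: str, salt: bytes, iterations: int = 100_000) -> bytes:
    # Iterated, salted derivation of the PIN; only this value crosses the wire.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, iterations)

# The card stores the verifier at PIN-setup time and compares against it later,
# so a substituted device that logs what it receives sees a derived value,
# never the PIN itself.
salt = b"per-card-salt"  # illustrative; a real card uses a card-specific salt
print(pin_verifier("123456", salt).hex())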
Example: Onboard non-predictable key-generation
Many devices offer both onboard key-generation, for example OpenPGP cards that generate an Ed25519 key internally on the device, and external key-generation, where the device imports an externally generated cryptographic key.
Let's apply the substitution threat model to this design: the user wishes to generate a key and trust the public key that came out of that process. The attacker substitutes the device for a malicious device during key-generation, imports the private key into a good device, and gives that back to the user. Most of the time, except during key generation, the user uses a good device, but the attacker still succeeded in having the user trust a public key which the attacker knows the private key for. The substitution may be a software modification, and the method to leak the private key to the attacker may be out-of-band signalling.
This means a good design never generates keys on-board, but imports them from a user-controllable environment. That approach should be mandatory: users should not use setups that generate private keys on-board, and ultimately security devices should not even include support for that.
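A minimal sketch of the recommended flow, assuming the Python cryptography package: generate the key in an environment you control and can audit, and only afterwards move it onto the device (for OpenPGP cards, e.g. via gpg's keytocard):

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Generate on the host, where the randomness source can be audited,
# rather than trusting an opaque on-device generator.
key = Ed25519PrivateKey.generate()
pem = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.NoEncryption(),  # protect it properly in real use
)
print(pem.decode())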
Example: Non-predictable randomness-generation
Many devices claim to generate random data, often with elaborate design documents explaining how good the randomness is.
Let's apply the substitution threat model to this design: the attacker replaces the intended good device with a malicious design that generates randomness which is predictable for the attacker. The user will never be able to detect the difference, since the random output is, well, random, and such weak randomness is typically not distinguishable from strong randomness. The user cannot know whether any cryptographic keys generated by such a generator were faulty or not.
This means a good design never generates non-predictable randomness on the device. That approach should be mandatory: users should not use setups that generate non-predictable randomness on the device, and ideally devices should not even have this functionality.
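To see why the user cannot tell the difference, consider this toy Python sketch: a hash-based stream from a fixed seed passes casual statistical inspection, yet is fully reproducible by anyone holding the seed:

import hashlib

def weak_stream(seed: bytes, n: int) -> bytes:
    # SHA-256 in counter mode: statistically random-looking output,
    # but fully predictable to anyone who knows the seed.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

# Looks uniformly random to the user; the attacker can regenerate every byte.
print(weak_stream(b"attacker-known-seed", 32).hex())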
Case-Study: Tillitis
I have warmed up a bit for this. Tillitis is a new security device with interesting properties, and core to its operation is the Compound Device Identifier (CDI): essentially, your Ed25519 private key (used for SSH etc.) is derived from the CDI, which is computed like this:
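(The formula itself did not survive here, so the following is a reconstruction from the public Tillitis documentation, where UDS is the per-device Unique Device Secret, device_app is the application binary loaded onto the device, and USS is an optional User Supplied Secret:)

cdi = blake2s(uds + blake2s(device_app) + uss)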
Let's apply the substitution threat model to this design: consider someone replacing the Tillitis key with a malicious key during postal delivery of the key to the user, where the replacement device is identical to the real Tillitis key but implements the following key derivation function:
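(Again a reconstruction, with the hash swapped for the attacker's function:)

cdi = weakprng(uds + blake2s(device_app) + uss)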
Where weakprng is a compromised algorithm that is predictable for the attacker but still appears random. Everything will work correctly, but the attacker will be able to learn the secrets used by the user, and the user will typically not be able to tell the difference, since the CDI is secret and the Ed25519 public key is not self-verifiable.
Conclusion
Remember that it is impossible to fully protect against this attack; that's why it is merely a thought experiment, intended to be used during the design of these devices. Consider an attacker that never gives you access to a good device, so that as a user you only ever use a malicious device. There is no way to have good security in this situation. This is not hypothetical: many well-funded organizations do what they can to deprive people of access to trustworthy security devices. Philosophically, it does not seem possible to tell whether these organizations have already succeeded 100% and there are only bad security devices around, where further resistance is futile; but to end on an optimistic note, let's assume that there is a non-negligible chance that they haven't succeeded. In that situation, this threat model becomes useful for improving matters by identifying less good designs, and that's why the design mantra of "mitigate harmful consequences" is crucial as a takeaway from this. Let's improve the design of security devices to further the security of their users!
CPUs can't do anything without being told what to do, which leaves the obvious problem of how do you tell a CPU to do something in the first place. On many CPUs this is handled in the form of a reset vector - an address the CPU is hardcoded to start reading instructions from when power is applied. The address the reset vector points to will typically be some form of ROM or flash that can be read by the CPU even if no other hardware has been configured yet. This allows the system vendor to ship code that will be executed immediately after poweron, configuring the rest of the hardware and eventually getting the system into a state where it can run user-supplied code.
The specific nature of the reset vector on x86 systems has varied over time, but it's effectively always been 16 bytes below the top of the address space - so, 0xffff0 on the 20-bit 8086, 0xfffff0 on the 24-bit 80286, and 0xfffffff0 on the 32-bit 80386. Convention on x86 systems is to have RAM starting at address 0, so the top of address space could be used to house the reset vector with as low a probability of conflicting with RAM as possible.
The most notable thing about x86 here, though, is that when it starts running code from the reset vector, it's still in real mode. x86 real mode is a holdover from a much earlier era of computing. Rather than addresses being absolute (ie, if you refer to a 32-bit address, you store the entire address in a 32-bit or larger register), they are 16-bit offsets that are added to the value stored in a "segment register". Different segment registers existed for code, data, and stack, so a 16-bit address could refer to different actual addresses depending on how it was being interpreted - jumping to a 16 bit address would result in that address being added to the code segment register, while reading from a 16 bit address would result in that address being added to the data segment register, and so on. This is all in order to retain compatibility with older chips, to the extent that even 64-bit x86 starts in real mode with segments and everything (and, also, still starts executing at 0xfffffff0 rather than 0xfffffffffffffff0 - 64-bit mode doesn't support real mode, so there's no way to express a 64-bit physical address using the segment registers, so we still start just below 4GB even though we have massively more address space available).
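For concreteness, a quick sketch of the arithmetic: the physical address is the segment register shifted left four bits plus the 16-bit offset, which both yields the 8086's 20-bit address space and means multiple segment:offset pairs can alias the same byte:

def real_mode_address(segment: int, offset: int) -> int:
    # Physical address = (segment << 4) + offset, giving 20 usable bits.
    return (segment << 4) + offset

print(hex(real_mode_address(0xF000, 0xFFF0)))  # 0xffff0 - the 8086 reset vector
print(hex(real_mode_address(0x1234, 0x0010)))  # 0x12350
print(hex(real_mode_address(0x1235, 0x0000)))  # 0x12350 again - aliasing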
Anyway. Everyone knows all this. For modern UEFI systems, the firmware that's launched from the reset vector then reprograms the CPU into a sensible mode (ie, one without all this segmentation bullshit), does things like configure the memory controller so you can actually access RAM (a process which involves using CPU cache as RAM, because programming a memory controller is sufficiently hard that you need to store more state than you can fit in registers alone, which means you need RAM, but you don't have RAM until the memory controller is working, but thankfully the CPU comes with several megabytes of RAM on its own in the form of cache, so phew). It's kind of ugly, but that's a consequence of a bunch of well-understood legacy decisions.
Except. This is not how modern Intel x86 boots. It's far stranger than that. Oh, yes, this is what it looks like is happening, but there's a bunch of stuff going on behind the scenes. Let's talk about boot security. The idea of any form of verified boot (such as UEFI Secure Boot) is that a signature on the next component of the boot chain is validated before that component is executed. But what verifies the first component in the boot chain? You can't simply ask the BIOS to verify itself - if an attacker can replace the BIOS, they can replace it with one that simply lies about having done so. Intel's solution to this is called Boot Guard.
But before we get to Boot Guard, we need to ensure the CPU is running in as bug-free a state as possible. So, when the CPU starts up, it examines the system flash and looks for a header that points at CPU microcode updates. Intel CPUs ship with built-in microcode, but it's frequently old and buggy and it's up to the system firmware to include a copy that's new enough that it's actually expected to work reliably. The microcode image is pulled out of flash, a signature is verified, and the new microcode starts running. This is true in both the Boot Guard and the non-Boot Guard scenarios. But for Boot Guard, before jumping to the reset vector, the microcode on the CPU reads an Authenticated Code Module (ACM) out of flash and verifies its signature against a hardcoded Intel key. If that checks out, it starts executing the ACM. Now, bear in mind that the CPU can't just verify the ACM and then execute it directly from flash - if it did, the flash could detect this, hand over a legitimate ACM for the verification, and then feed the CPU different instructions when it reads them again to execute them (a Time of Check vs Time of Use, or TOCTOU, vulnerability). So the ACM has to be copied onto the CPU before it's verified and executed, which means we need RAM, which means the CPU already needs to know how to configure its cache to be used as RAM.
Anyway. We now have an ACM loaded and verified, and it can safely be executed. The ACM does various things, but the most important from the Boot Guard perspective is that it reads a set of write-once fuses in the motherboard chipset that represent the SHA256 of a public key. It then reads the initial block of the firmware (the Initial Boot Block, or IBB) into RAM (or, well, cache, as previously described) and parses it. There's a block that contains a public key - it hashes that key and verifies that it matches the SHA256 from the fuses. It then uses that key to validate a signature on the IBB. If it all checks out, it executes the IBB and everything starts looking like the nice simple model we had before.
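Schematically, the check looks something like the following Python sketch - with Ed25519 standing in for the actual signature scheme, since the real fuse layout and key formats are Intel's own; the point being illustrated is the two-step structure, hash-of-key against the fuses and then a signature over the IBB:

import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)

def verify_ibb(fused_key_hash: bytes, public_key: bytes,
               signature: bytes, ibb: bytes) -> bool:
    # Step 1: the public key embedded in flash must hash to the fused value.
    if hashlib.sha256(public_key).digest() != fused_key_hash:
        return False
    # Step 2: the IBB must carry a valid signature under that key.
    try:
        Ed25519PublicKey.from_public_bytes(public_key).verify(signature, ibb)
        return True
    except InvalidSignature:
        return False

# Provisioning: the hash of the OEM key is burned into the chipset "fuses"...
oem = Ed25519PrivateKey.generate()
oem_pub = oem.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)
fuses = hashlib.sha256(oem_pub).digest()
# ...and at boot the ACM checks the IBB it pulled out of flash.
ibb = b"initial boot block contents"
assert verify_ibb(fuses, oem_pub, oem.sign(ibb), ibb)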
Except, well, doesn't this seem like an awfully complicated bunch of code to implement in real mode? And yes, doing all of this modern crypto with only 16-bit registers does sound like a pain. So, it doesn't. All of this is happening in a perfectly sensible 32 bit mode, and the CPU actually switches back to the awful segmented configuration afterwards so it's still compatible with an 80386 from 1986. The "good" news is that at least firmware can detect that the CPU has already configured the cache as RAM and can skip doing that itself.
I'm skipping over some steps here - the ACM actually does other stuff around measuring the firmware into the TPM and doing various bits of TXT setup for people who want DRTM in their lives, but the short version is that the CPU bootstraps itself into a state where it works like a modern CPU and then deliberately turns a bunch of the sensible functionality off again before it starts executing firmware. I'm also missing out the fact that this entire process only kicks off after the Management Engine says it can, which means we're waiting for an entirely independent x86 to boot an entire OS before our CPU even starts pretending to execute the system firmware.
Of course, as mentioned before, on modern systems the firmware will then reprogram the CPU into something actually sensible so OS developers no longer need to care about this[1][2], which means we've bounced between multiple states for no reason other than the possibility that someone wants to run legacy BIOS and then boot DOS on a CPU with like 5 orders of magnitude more transistors than the 8086.
tl;dr why can't my x86 wake up with the gin protected mode already inside it
[1] Ha uh except that on ACPI resume we're going to skip most of the firmware setup code so we still need to handle the CPU being in fucking 16-bit mode because suspend/resume is basically an extremely long reboot cycle
[2] Oh yeah also you probably have multiple cores on your CPU and well bad news about the state most of the cores are in when the OS boots because the firmware never started them up so they're going to come up in 16-bit real mode even if your boot CPU is already in 64-bit protected mode, unless you were using TXT in which case you have a different sort of nightmare that if we're going to try to map it onto real world nightmare concepts is one that involves a lot of teeth. Or, well, that used to be the case, but ACPI 6.4 (released in 2021) provides a mechanism for the OS to ask the firmware to wake the CPU up for it so this is invisible to the OS, but you're still relying on the firmware to actually do the heavy lifting here
I recently bought a Banana Pi BPI-M5, which uses the Amlogic S905X3 SoC: these are my notes about installing Debian on it.
While this SoC is supported by upstream U-Boot, it is not supported by the Debian U-Boot package, so debian-installer does not work. Do not be fooled by seeing the DTB file for this exact board being distributed with debian-installer: all DTB files are, and it does not mean that the board is supposed to work.
As I documented in #1033504, the Debian kernels are currently missing some patches needed to support the SD card reader.
I started by downloading an Armbian Banana Pi image and booted it from an SD card. From there I partitioned the eMMC, which always appears as /dev/mmcblk1:
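Concretely, something along these lines (an illustrative sketch rather than the exact commands; note the gap left before the first partition, for the reason explained next):

# Illustrative only - adjust device, sizes and filesystem to taste
parted -s /dev/mmcblk1 mklabel msdos
parted -s /dev/mmcblk1 mkpart primary ext4 4MiB 100%
mkfs.ext4 /dev/mmcblk1p1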
Make sure to leave enough space before the first partition, or else U-Boot will overwrite it: as is common for many ARM SoCs, U-Boot lives somewhere in the gap between the MBR and the first partition.
I looked at Armbian's /usr/lib/u-boot/platform_install.sh and installed U-Boot by manually copying it to the eMMC:
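The copy is along these lines (a sketch modelled on what Armbian's write_uboot_platform() does for meson64 boards; the file name is a placeholder, and the offsets should be verified against the script before trusting them):

# First 444 bytes of sector 0 only, to avoid clobbering the MBR partition table
dd if=u-boot.bin of=/dev/mmcblk1 bs=1 count=444 conv=fsync
# Remainder of the image, from sector 1 onwards
dd if=u-boot.bin of=/dev/mmcblk1 bs=512 skip=1 seek=1 conv=fsync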
I wanted to have a fully working flash-kernel, so I used Armbian's boot.scr as a template to create /etc/flash-kernel/bootscript/bootscr.meson and then added a custom entry for the Banana Pi to /etc/flash-kernel/db:
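For reference, a flash-kernel db stanza for this board would look roughly like the following; the Machine string must match /proc/device-tree/model exactly, and the DTB-Id here is a guess at the right upstream name, so treat it as a template rather than something to paste verbatim:

Machine: Banana Pi BPI-M5
Kernel-Flavors: arm64
DTB-Id: amlogic/meson-sm1-bananapi-m5.dtb
Boot-Script-Path: /boot/boot.scr
U-Boot-Script-Name: bootscr.meson
Required-Packages: u-boot-tools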
All things considered, I do not think I would recommend that Debian users buy Amlogic-based boards, since there are many other better-supported SoCs.
I've used hardware-backed OpenPGP keys since 2006, when I imported newly generated rsa1024 subkeys to a FSFE Fellowship card. This worked well for several years, and I recall buying more ZeitControl cards for multi-machine usage and backup purposes. As a side note, I recall being unsatisfied with the weak 1024-bit RSA subkeys at the time (my primary key was a somewhat stronger 1280-bit RSA key created back in 2002), but OpenPGP cards at the time didn't support more than 1024-bit RSA, and were (and still often are) also limited to power-of-two RSA key sizes, which I dislike.
I had my master key on disk with a strong password for a while, mostly to refresh the expiration time of the subkeys and to sign others' OpenPGP keys. At some point I stopped carrying around encrypted copies of my master key. That was my main setup when I migrated to a new, stronger 3744-bit RSA key with rsa2048 subkeys on a YubiKey NEO back in 2014. At that point, signing others' OpenPGP keys was a rare enough occurrence that I settled for bringing out my offline machine to perform this operation, transferring the public keys to sign on USB sticks. In 2019 I re-evaluated my OpenPGP setup and ended up creating an offline Ed25519 key with subkeys on a FST-01G running Gnuk. My approach for signing others' OpenPGP keys was still to bring out my offline machine and sign things using the master secret, using USB sticks for storage and transport. Which meant I almost never did that, because it took too much effort. So my 2019-era Ed25519 key still only has a handful of signatures on it, since I had essentially stopped signing others' keys, which is the traditional way of getting signatures in return.
None of this caused any critical problem for me, because I continued to use my old 2014-era RSA3744 key in parallel with my new 2019-era Ed25519 key, since too many systems didn't handle Ed25519. However, during 2022 this changed, and the only remaining environment where I still used my RSA3744 key was Debian, which requires OpenPGP signatures on a new key to allow it to replace an older key. I was in denial about this sub-optimal solution during 2022 and endured its practical consequences, having to use the YubiKey NEO (which I had replaced with a permanently inserted YubiKey Nano at some point) for Debian-related purposes alone.
In December 2022 I bought a new laptop and set up a FST-01SZ with my Ed25519 key, and while I have taken a vacation from Debian, I continue to extend the expiration period of the old RSA3744 key in case I ever have to use it again, so the overall OpenPGP setup was still sub-optimal. Having two valid OpenPGP keys at the same time causes people to use both for email encryption (leading me to have to use both devices), and the WKD Key Discovery protocol doesn't like two valid keys either. At FOSDEM '23 I ran into Andre Heinecke at GnuPG, and I couldn't help complaining about how complex and unsatisfying all OpenPGP-related matters were; he mildly ignored my rant and asked why I didn't put the master key on another smartcard. The comment sunk in when I came home, and recently I connected all the dots. This post is a summary of what I did to move my offline OpenPGP master key to a Nitrokey Start.
First a word about device choice: I still prefer to use hardware devices that are as compatible with free software as possible, but the FST-01G and FST-01SZ are no longer easily available for purchase. I got a comment about the Nitrokey Start on my last post, and had two of them available to experiment with. There are things to dislike about the Nitrokey Start compared to the YubiKey (e.g., the relatively insecure chip architecture, the bulkier form factor, and the lack of FIDO/U2F/OATH support), but as far as I know there is no more widely available owner-controlled device that is manufactured for the intended purpose of implementing an OpenPGP card. Thus it hits the sweet spot for me.
The first step is to run the latest firmware on the Nitrokey Start, for bug-fixes and the important OpenSSH 9.0 compatibility, and there is reproducibly-built firmware published that you can install using pynitrokey. I run Trisquel 11 aramo on my laptop, which does not include the Python pip package (likely because it promotes installing non-free software), so that was a slight complication. Building the firmware locally may have worked, and I would like to do that eventually to confirm the published firmware; however, to save time I settled for installing the Ubuntu 22.04 packages on my machine:
$ sha256sum python3-pip*
ded6b3867a4a4cbaff0940cab366975d6aeecc76b9f2d2efa3deceb062668b1c python3-pip_22.0.2+dfsg-1ubuntu0.2_all.deb
e1561575130c41dc3309023a345de337e84b4b04c21c74db57f599e267114325 python3-pip-whl_22.0.2+dfsg-1ubuntu0.2_all.deb
$ doas dpkg -i python3-pip*
...
$ doas apt install -f
...
$
Installing pynitrokey downloaded a bunch of dependencies, and it would be nice to audit the license and security vulnerabilities for each of them. (Verbose output below slightly redacted.)
jas@kaka:~$ pip3 install --user pynitrokey
Collecting pynitrokey
Downloading pynitrokey-0.4.34-py3-none-any.whl (572 kB)
Collecting frozendict~=2.3.4
Downloading frozendict-2.3.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (113 kB)
Requirement already satisfied: click<9,>=8.0.0 in /usr/lib/python3/dist-packages (from pynitrokey) (8.0.3)
Collecting ecdsa
Downloading ecdsa-0.18.0-py2.py3-none-any.whl (142 kB)
Collecting python-dateutil~=2.7.0
Downloading python_dateutil-2.7.5-py2.py3-none-any.whl (225 kB)
Collecting fido2<2,>=1.1.0
Downloading fido2-1.1.0-py3-none-any.whl (201 kB)
Collecting tlv8
Downloading tlv8-0.10.0.tar.gz (16 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: certifi>=14.5.14 in /usr/lib/python3/dist-packages (from pynitrokey) (2020.6.20)
Requirement already satisfied: pyusb in /usr/lib/python3/dist-packages (from pynitrokey) (1.2.1.post1)
Collecting urllib3~=1.26.7
Downloading urllib3-1.26.15-py2.py3-none-any.whl (140 kB)
Collecting spsdk<1.8.0,>=1.7.0
Downloading spsdk-1.7.1-py3-none-any.whl (684 kB)
Collecting typing_extensions~=4.3.0
Downloading typing_extensions-4.3.0-py3-none-any.whl (25 kB)
Requirement already satisfied: cryptography<37,>=3.4.4 in /usr/lib/python3/dist-packages (from pynitrokey) (3.4.8)
Collecting intelhex
Downloading intelhex-2.3.0-py2.py3-none-any.whl (50 kB)
Collecting nkdfu
Downloading nkdfu-0.2-py3-none-any.whl (16 kB)
Requirement already satisfied: requests in /usr/lib/python3/dist-packages (from pynitrokey) (2.25.1)
Collecting tqdm
Downloading tqdm-4.65.0-py3-none-any.whl (77 kB)
Collecting nrfutil<7,>=6.1.4
Downloading nrfutil-6.1.7.tar.gz (845 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: cffi in /usr/lib/python3/dist-packages (from pynitrokey) (1.15.0)
Collecting crcmod
Downloading crcmod-1.7.tar.gz (89 kB)
Preparing metadata (setup.py) ... done
Collecting libusb1==1.9.3
Downloading libusb1-1.9.3-py3-none-any.whl (60 kB)
Collecting pc_ble_driver_py>=0.16.4
Downloading pc_ble_driver_py-0.17.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.9 MB)
Collecting piccata
Downloading piccata-2.0.3-py3-none-any.whl (21 kB)
Collecting protobuf<4.0.0,>=3.17.3
Downloading protobuf-3.20.3-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (1.1 MB)
Collecting pyserial
Downloading pyserial-3.5-py2.py3-none-any.whl (90 kB)
Collecting pyspinel>=1.0.0a3
Downloading pyspinel-1.0.3.tar.gz (58 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: pyyaml in /usr/lib/python3/dist-packages (from nrfutil<7,>=6.1.4->pynitrokey) (5.4.1)
Requirement already satisfied: six>=1.5 in /usr/lib/python3/dist-packages (from python-dateutil~=2.7.0->pynitrokey) (1.16.0)
Collecting pylink-square<0.11.9,>=0.8.2
Downloading pylink_square-0.11.1-py2.py3-none-any.whl (78 kB)
Collecting jinja2<3.1,>=2.11
Downloading Jinja2-3.0.3-py3-none-any.whl (133 kB)
Collecting bincopy<17.11,>=17.10.2
Downloading bincopy-17.10.3-py3-none-any.whl (17 kB)
Collecting fastjsonschema>=2.15.1
Downloading fastjsonschema-2.16.3-py3-none-any.whl (23 kB)
Collecting astunparse<2,>=1.6
Downloading astunparse-1.6.3-py2.py3-none-any.whl (12 kB)
Collecting oscrypto~=1.2
Downloading oscrypto-1.3.0-py2.py3-none-any.whl (194 kB)
Collecting deepmerge==0.3.0
Downloading deepmerge-0.3.0-py2.py3-none-any.whl (7.6 kB)
Collecting pyocd<=0.31.0,>=0.28.3
Downloading pyocd-0.31.0-py3-none-any.whl (12.5 MB)
Collecting click-option-group<0.6,>=0.3.0
Downloading click_option_group-0.5.5-py3-none-any.whl (12 kB)
Collecting pycryptodome<4,>=3.9.3
Downloading pycryptodome-3.17-cp35-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.1 MB)
Collecting pyocd-pemicro<1.2.0,>=1.1.1
Downloading pyocd_pemicro-1.1.5-py3-none-any.whl (9.0 kB)
Requirement already satisfied: colorama<1,>=0.4.4 in /usr/lib/python3/dist-packages (from spsdk<1.8.0,>=1.7.0->pynitrokey) (0.4.4)
Collecting commentjson<1,>=0.9
Downloading commentjson-0.9.0.tar.gz (8.7 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: asn1crypto<2,>=1.2 in /usr/lib/python3/dist-packages (from spsdk<1.8.0,>=1.7.0->pynitrokey) (1.4.0)
Collecting pypemicro<0.2.0,>=0.1.9
Downloading pypemicro-0.1.11-py3-none-any.whl (5.7 MB)
Collecting libusbsio>=2.1.11
Downloading libusbsio-2.1.11-py3-none-any.whl (247 kB)
Collecting sly==0.4
Downloading sly-0.4.tar.gz (60 kB)
Preparing metadata (setup.py) ... done
Collecting ruamel.yaml<0.18.0,>=0.17
Downloading ruamel.yaml-0.17.21-py3-none-any.whl (109 kB)
Collecting cmsis-pack-manager<0.3.0
Downloading cmsis_pack_manager-0.2.10-py2.py3-none-manylinux1_x86_64.whl (25.1 MB)
Collecting click-command-tree==1.1.0
Downloading click_command_tree-1.1.0-py3-none-any.whl (3.6 kB)
Requirement already satisfied: bitstring<3.2,>=3.1 in /usr/lib/python3/dist-packages (from spsdk<1.8.0,>=1.7.0->pynitrokey) (3.1.7)
Collecting hexdump~=3.3
Downloading hexdump-3.3.zip (12 kB)
Preparing metadata (setup.py) ... done
Collecting fire
Downloading fire-0.5.0.tar.gz (88 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: wheel<1.0,>=0.23.0 in /usr/lib/python3/dist-packages (from astunparse<2,>=1.6->spsdk<1.8.0,>=1.7.0->pynitrokey) (0.37.1)
Collecting humanfriendly
Downloading humanfriendly-10.0-py2.py3-none-any.whl (86 kB)
Collecting argparse-addons>=0.4.0
Downloading argparse_addons-0.12.0-py3-none-any.whl (3.3 kB)
Collecting pyelftools
Downloading pyelftools-0.29-py2.py3-none-any.whl (174 kB)
Collecting milksnake>=0.1.2
Downloading milksnake-0.1.5-py2.py3-none-any.whl (9.6 kB)
Requirement already satisfied: appdirs>=1.4 in /usr/lib/python3/dist-packages (from cmsis-pack-manager<0.3.0->spsdk<1.8.0,>=1.7.0->pynitrokey) (1.4.4)
Collecting lark-parser<0.8.0,>=0.7.1
Downloading lark-parser-0.7.8.tar.gz (276 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: MarkupSafe>=2.0 in /usr/lib/python3/dist-packages (from jinja2<3.1,>=2.11->spsdk<1.8.0,>=1.7.0->pynitrokey) (2.0.1)
Collecting asn1crypto<2,>=1.2
Downloading asn1crypto-1.5.1-py2.py3-none-any.whl (105 kB)
Collecting wrapt
Downloading wrapt-1.15.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (78 kB)
Collecting future
Downloading future-0.18.3.tar.gz (840 kB)
Preparing metadata (setup.py) ... done
Collecting psutil>=5.2.2
Downloading psutil-5.9.4-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (280 kB)
Collecting capstone<5.0,>=4.0
Downloading capstone-4.0.2-py2.py3-none-manylinux1_x86_64.whl (2.1 MB)
Collecting naturalsort<2.0,>=1.5
Downloading naturalsort-1.5.1.tar.gz (7.4 kB)
Preparing metadata (setup.py) ... done
Collecting prettytable<3.0,>=2.0
Downloading prettytable-2.5.0-py3-none-any.whl (24 kB)
Collecting intervaltree<4.0,>=3.0.2
Downloading intervaltree-3.1.0.tar.gz (32 kB)
Preparing metadata (setup.py) ... done
Collecting ruamel.yaml.clib>=0.2.6
Downloading ruamel.yaml.clib-0.2.7-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl (485 kB)
Collecting termcolor
Downloading termcolor-2.2.0-py3-none-any.whl (6.6 kB)
Collecting sortedcontainers<3.0,>=2.0
Downloading sortedcontainers-2.4.0-py2.py3-none-any.whl (29 kB)
Requirement already satisfied: wcwidth in /usr/lib/python3/dist-packages (from prettytable<3.0,>=2.0->pyocd<=0.31.0,>=0.28.3->spsdk<1.8.0,>=1.7.0->pynitrokey) (0.2.5)
Building wheels for collected packages: nrfutil, crcmod, sly, tlv8, commentjson, hexdump, pyspinel, fire, intervaltree, lark-parser, naturalsort, future
Building wheel for nrfutil (setup.py) ... done
Created wheel for nrfutil: filename=nrfutil-6.1.7-py3-none-any.whl size=898520 sha256=de6f8803f51d6c26d24dc7df6292064a468ff3f389d73370433fde5582b84a10
Stored in directory: /home/jas/.cache/pip/wheels/39/2b/9b/98ab2dd716da746290e6728bdb557b14c1c9a54cb9ed86e13b
Building wheel for crcmod (setup.py) ... done
Created wheel for crcmod: filename=crcmod-1.7-cp310-cp310-linux_x86_64.whl size=31422 sha256=5149ac56fcbfa0606760eef5220fcedc66be560adf68cf38c604af3ad0e4a8b0
Stored in directory: /home/jas/.cache/pip/wheels/85/4c/07/72215c529bd59d67e3dac29711d7aba1b692f543c808ba9e86
Building wheel for sly (setup.py) ... done
Created wheel for sly: filename=sly-0.4-py3-none-any.whl size=27352 sha256=f614e413918de45c73d1e9a8dca61ca07dc760d9740553400efc234c891f7fde
Stored in directory: /home/jas/.cache/pip/wheels/a2/23/4a/6a84282a0d2c29f003012dc565b3126e427972e8b8157ea51f
Building wheel for tlv8 (setup.py) ... done
Created wheel for tlv8: filename=tlv8-0.10.0-py3-none-any.whl size=11266 sha256=3ec8b3c45977a3addbc66b7b99e1d81b146607c3a269502b9b5651900a0e2d08
Stored in directory: /home/jas/.cache/pip/wheels/e9/35/86/66a473cc2abb0c7f21ed39c30a3b2219b16bd2cdb4b33cfc2c
Building wheel for commentjson (setup.py) ... done
Created wheel for commentjson: filename=commentjson-0.9.0-py3-none-any.whl size=12092 sha256=28b6413132d6d7798a18cf8c76885dc69f676ea763ffcb08775a3c2c43444f4a
Stored in directory: /home/jas/.cache/pip/wheels/7d/90/23/6358a234ca5b4ec0866d447079b97fedf9883387d1d7d074e5
Building wheel for hexdump (setup.py) ... done
Created wheel for hexdump: filename=hexdump-3.3-py3-none-any.whl size=8913 sha256=79dfadd42edbc9acaeac1987464f2df4053784fff18b96408c1309b74fd09f50
Stored in directory: /home/jas/.cache/pip/wheels/26/28/f7/f47d7ecd9ae44c4457e72c8bb617ef18ab332ee2b2a1047e87
Building wheel for pyspinel (setup.py) ... done
Created wheel for pyspinel: filename=pyspinel-1.0.3-py3-none-any.whl size=65033 sha256=01dc27f81f28b4830a0cf2336dc737ef309a1287fcf33f57a8a4c5bed3b5f0a6
Stored in directory: /home/jas/.cache/pip/wheels/95/ec/4b/6e3e2ee18e7292d26a65659f75d07411a6e69158bb05507590
Building wheel for fire (setup.py) ... done
Created wheel for fire: filename=fire-0.5.0-py2.py3-none-any.whl size=116951 sha256=3d288585478c91a6914629eb739ea789828eb2d0267febc7c5390cb24ba153e8
Stored in directory: /home/jas/.cache/pip/wheels/90/d4/f7/9404e5db0116bd4d43e5666eaa3e70ab53723e1e3ea40c9a95
Building wheel for intervaltree (setup.py) ... done
Created wheel for intervaltree: filename=intervaltree-3.1.0-py2.py3-none-any.whl size=26119 sha256=5ff1def22ba883af25c90d90ef7c6518496fcd47dd2cbc53a57ec04cd60dc21d
Stored in directory: /home/jas/.cache/pip/wheels/fa/80/8c/43488a924a046b733b64de3fac99252674c892a4c3801c0a61
Building wheel for lark-parser (setup.py) ... done
Created wheel for lark-parser: filename=lark_parser-0.7.8-py2.py3-none-any.whl size=62527 sha256=3d2ec1d0f926fc2688d40777f7ef93c9986f874169132b1af590b6afc038f4be
Stored in directory: /home/jas/.cache/pip/wheels/29/30/94/33e8b58318aa05cb1842b365843036e0280af5983abb966b83
Building wheel for naturalsort (setup.py) ... done
Created wheel for naturalsort: filename=naturalsort-1.5.1-py3-none-any.whl size=7526 sha256=bdecac4a49f2416924548cae6c124c85d5333e9e61c563232678ed182969d453
Stored in directory: /home/jas/.cache/pip/wheels/a6/8e/c9/98cfa614fff2979b457fa2d9ad45ec85fa417e7e3e2e43be51
Building wheel for future (setup.py) ... done
Created wheel for future: filename=future-0.18.3-py3-none-any.whl size=492037 sha256=57a01e68feca2b5563f5f624141267f399082d2f05f55886f71b5d6e6cf2b02c
Stored in directory: /home/jas/.cache/pip/wheels/5e/a9/47/f118e66afd12240e4662752cc22cefae5d97275623aa8ef57d
Successfully built nrfutil crcmod sly tlv8 commentjson hexdump pyspinel fire intervaltree lark-parser naturalsort future
Installing collected packages: tlv8, sortedcontainers, sly, pyserial, pyelftools, piccata, naturalsort, libusb1, lark-parser, intelhex, hexdump, fastjsonschema, crcmod, asn1crypto, wrapt, urllib3, typing_extensions, tqdm, termcolor, ruamel.yaml.clib, python-dateutil, pyspinel, pypemicro, pycryptodome, psutil, protobuf, prettytable, oscrypto, milksnake, libusbsio, jinja2, intervaltree, humanfriendly, future, frozendict, fido2, ecdsa, deepmerge, commentjson, click-option-group, click-command-tree, capstone, astunparse, argparse-addons, ruamel.yaml, pyocd-pemicro, pylink-square, pc_ble_driver_py, fire, cmsis-pack-manager, bincopy, pyocd, nrfutil, nkdfu, spsdk, pynitrokey
WARNING: The script nitropy is installed in '/home/jas/.local/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed argparse-addons-0.12.0 asn1crypto-1.5.1 astunparse-1.6.3 bincopy-17.10.3 capstone-4.0.2 click-command-tree-1.1.0 click-option-group-0.5.5 cmsis-pack-manager-0.2.10 commentjson-0.9.0 crcmod-1.7 deepmerge-0.3.0 ecdsa-0.18.0 fastjsonschema-2.16.3 fido2-1.1.0 fire-0.5.0 frozendict-2.3.5 future-0.18.3 hexdump-3.3 humanfriendly-10.0 intelhex-2.3.0 intervaltree-3.1.0 jinja2-3.0.3 lark-parser-0.7.8 libusb1-1.9.3 libusbsio-2.1.11 milksnake-0.1.5 naturalsort-1.5.1 nkdfu-0.2 nrfutil-6.1.7 oscrypto-1.3.0 pc_ble_driver_py-0.17.0 piccata-2.0.3 prettytable-2.5.0 protobuf-3.20.3 psutil-5.9.4 pycryptodome-3.17 pyelftools-0.29 pylink-square-0.11.1 pynitrokey-0.4.34 pyocd-0.31.0 pyocd-pemicro-1.1.5 pypemicro-0.1.11 pyserial-3.5 pyspinel-1.0.3 python-dateutil-2.7.5 ruamel.yaml-0.17.21 ruamel.yaml.clib-0.2.7 sly-0.4 sortedcontainers-2.4.0 spsdk-1.7.1 termcolor-2.2.0 tlv8-0.10.0 tqdm-4.65.0 typing_extensions-4.3.0 urllib3-1.26.15 wrapt-1.15.0
jas@kaka:~$
Then upgrading the device worked remarkably well, although I wish the tool would have printed URLs and checksums for the firmware files to allow easy confirmation.
jas@kaka:~$ PATH=$PATH:/home/jas/.local/bin
jas@kaka:~$ nitropy start list
Command line tool to interact with Nitrokey devices 0.4.34
:: 'Nitrokey Start' keys:
FSIJ-1.2.15-5D271572: Nitrokey Nitrokey Start (RTM.12.1-RC2-modified)
jas@kaka:~$ nitropy start update
Command line tool to interact with Nitrokey devices 0.4.34
Nitrokey Start firmware update tool
Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.35
System: Linux, is_linux: True
Python: 3.10.6
Saving run log to: /tmp/nitropy.log.gc5753a8
Admin PIN:
Firmware data to be used:
- FirmwareType.REGNUAL: 4408, hash: ...b'72a30389' valid (from ...built/RTM.13/regnual.bin)
- FirmwareType.GNUK: 129024, hash: ...b'25a4289b' valid (from ...prebuilt/RTM.13/gnuk.bin)
Currently connected device strings:
Device:
Vendor: Nitrokey
Product: Nitrokey Start
Serial: FSIJ-1.2.15-5D271572
Revision: RTM.12.1-RC2-modified
Config: *:*:8e82
Sys: 3.0
Board: NITROKEY-START-G
initial device strings: [{'name': '', 'Vendor': 'Nitrokey', 'Product': 'Nitrokey Start', 'Serial': 'FSIJ-1.2.15-5D271572', 'Revision': 'RTM.12.1-RC2-modified', 'Config': '*:*:8e82', 'Sys': '3.0', 'Board': 'NITROKEY-START-G'}]
Please note:
- Latest firmware available is:
RTM.13 (published: 2022-12-08T10:59:11Z)
- provided firmware: None
- all data will be removed from the device!
- do not interrupt update process - the device may not run properly!
- the process should not take more than 1 minute
Do you want to continue? [yes/no]: yes
...
Starting bootloader upload procedure
Device: Nitrokey Start FSIJ-1.2.15-5D271572
Connected to the device
Running update!
Do NOT remove the device from the USB slot, until further notice
Downloading flash upgrade program...
Executing flash upgrade...
Waiting for device to appear:
Wait 20 seconds.....
Downloading the program
Protecting device
Finish flashing
Resetting device
Update procedure finished. Device could be removed from USB slot.
Currently connected device strings (after upgrade):
Device:
Vendor: Nitrokey
Product: Nitrokey Start
Serial: FSIJ-1.2.19-5D271572
Revision: RTM.13
Config: *:*:8e82
Sys: 3.0
Board: NITROKEY-START-G
device can now be safely removed from the USB slot
final device strings: [{'name': '', 'Vendor': 'Nitrokey', 'Product': 'Nitrokey Start', 'Serial': 'FSIJ-1.2.19-5D271572', 'Revision': 'RTM.13', 'Config': '*:*:8e82', 'Sys': '3.0', 'Board': 'NITROKEY-START-G'}]
finishing session 2023-03-16 21:49:07.371291
Log saved to: /tmp/nitropy.log.gc5753a8
jas@kaka:~$
jas@kaka:~$ nitropy start list
Command line tool to interact with Nitrokey devices 0.4.34
:: 'Nitrokey Start' keys:
FSIJ-1.2.19-5D271572: Nitrokey Nitrokey Start (RTM.13)
jas@kaka:~$
Before importing the master key to this device, it should be configured. Note the commands at the beginning to make sure scdaemon/pcscd is not running, because they may have cached state from earlier cards. Change the PIN codes as you like after this; my experience with Gnuk was that the Admin PIN had to be changed first, then you import the key, and then you change the PIN.
jas@kaka:~$ gpg-connect-agent "SCD KILLSCD" "SCD BYE" /bye
OK
ERR 67125247 Slut på fil <GPG Agent>
jas@kaka:~$ ps auxww | grep -e pcsc -e scd
jas 11651 0.0 0.0 3468 1672 pts/0 R+ 21:54 0:00 grep --color=auto -e pcsc -e scd
jas@kaka:~$ gpg --card-edit
Reader ...........: 20A0:4211:FSIJ-1.2.19-5D271572:0
Application ID ...: D276000124010200FFFE5D2715720000
Application type .: OpenPGP
Version ..........: 2.0
Manufacturer .....: unmanaged S/N range
Serial number ....: 5D271572
Name of cardholder: [not set]
Language prefs ...: [not set]
Salutation .......:
URL of public key : [not set]
Login data .......: [not set]
Signature PIN ....: forced
Key attributes ...: rsa2048 rsa2048 rsa2048
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 0
KDF setting ......: off
Signature key ....: [none]
Encryption key....: [none]
Authentication key: [none]
General key info..: [none]
gpg/card> admin
Admin commands are allowed
gpg/card> kdf-setup
gpg/card> passwd
gpg: OpenPGP card no. D276000124010200FFFE5D2715720000 detected
1 - change PIN
2 - unblock PIN
3 - change Admin PIN
4 - set the Reset Code
Q - quit
Your selection? 3
PIN changed.
1 - change PIN
2 - unblock PIN
3 - change Admin PIN
4 - set the Reset Code
Q - quit
Your selection? q
gpg/card> name
Cardholder's surname: Josefsson
Cardholder's given name: Simon
gpg/card> lang
Language preferences: sv
gpg/card> sex
Salutation (M = Mr., F = Ms., or space): m
gpg/card> login
Login data (account name): jas
gpg/card> url
URL to retrieve public key: https://josefsson.org/key-20190320.txt
gpg/card> forcesig
gpg/card> key-attr
Changing card key attribute for: Signature key
Please select what kind of key you want:
(1) RSA
(2) ECC
Your selection? 2
Please select which elliptic curve you want:
(1) Curve 25519
(4) NIST P-384
Your selection? 1
The card will now be re-configured to generate a key of type: ed25519
Note: There is no guarantee that the card supports the requested size.
If the key generation does not succeed, please check the
documentation of your card to see what sizes are allowed.
Changing card key attribute for: Encryption key
Please select what kind of key you want:
(1) RSA
(2) ECC
Your selection? 2
Please select which elliptic curve you want:
(1) Curve 25519
(4) NIST P-384
Your selection? 1
The card will now be re-configured to generate a key of type: cv25519
Changing card key attribute for: Authentication key
Please select what kind of key you want:
(1) RSA
(2) ECC
Your selection? 2
Please select which elliptic curve you want:
(1) Curve 25519
(4) NIST P-384
Your selection? 1
The card will now be re-configured to generate a key of type: ed25519
gpg/card>
jas@kaka:~$ gpg --card-edit
Reader ...........: 20A0:4211:FSIJ-1.2.19-5D271572:0
Application ID ...: D276000124010200FFFE5D2715720000
Application type .: OpenPGP
Version ..........: 2.0
Manufacturer .....: unmanaged S/N range
Serial number ....: 5D271572
Name of cardholder: Simon Josefsson
Language prefs ...: sv
Salutation .......: Mr.
URL of public key : https://josefsson.org/key-20190320.txt
Login data .......: jas
Signature PIN ....: not forced
Key attributes ...: ed25519 cv25519 ed25519
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 0
KDF setting ......: on
Signature key ....: [none]
Encryption key....: [none]
Authentication key: [none]
General key info..: [none]
jas@kaka:~$
Once set up, bring out your offline machine, boot it, and mount your USB stick with the offline key. The paths below will be different; this is using a somewhat unorthodox approach of working with fresh GnuPG configuration paths that I chose for the USB stick.
jas@kaka:/media/jas/2c699cbd-b77e-4434-a0d6-0c4965864296$ cp -a gnupghome-backup-masterkey gnupghome-import-nitrokey-5D271572
jas@kaka:/media/jas/2c699cbd-b77e-4434-a0d6-0c4965864296$ gpg --homedir $PWD/gnupghome-import-nitrokey-5D271572 --edit-key B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE
gpg (GnuPG) 2.2.27; Copyright (C) 2021 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Secret key is available.
sec ed25519/D73CF638C53C06BE
created: 2019-03-20 expired: 2019-10-22 usage: SC
trust: ultimate validity: expired
[ expired] (1). Simon Josefsson <simon@josefsson.org>
gpg> keytocard
Really move the primary key? (y/N) y
Please select where to store the key:
(1) Signature key
(3) Authentication key
Your selection? 1
sec ed25519/D73CF638C53C06BE
created: 2019-03-20 expired: 2019-10-22 usage: SC
trust: ultimate validity: expired
[ expired] (1). Simon Josefsson <simon@josefsson.org>
gpg>
Save changes? (y/N) y
jas@kaka:/media/jas/2c699cbd-b77e-4434-a0d6-0c4965864296$
At this point it is useful to confirm that the Nitrokey has the master key available and that it is possible to sign statements with it, back on your regular machine:
jas@kaka:~$ gpg --card-status
Reader ...........: 20A0:4211:FSIJ-1.2.19-5D271572:0
Application ID ...: D276000124010200FFFE5D2715720000
Application type .: OpenPGP
Version ..........: 2.0
Manufacturer .....: unmanaged S/N range
Serial number ....: 5D271572
Name of cardholder: Simon Josefsson
Language prefs ...: sv
Salutation .......: Mr.
URL of public key : https://josefsson.org/key-20190320.txt
Login data .......: jas
Signature PIN ....: not forced
Key attributes ...: ed25519 cv25519 ed25519
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 1
KDF setting ......: on
Signature key ....: B1D2 BD13 75BE CB78 4CF4 F8C4 D73C F638 C53C 06BE
created ....: 2019-03-20 23:37:24
Encryption key....: [none]
Authentication key: [none]
General key info..: pub ed25519/D73CF638C53C06BE 2019-03-20 Simon Josefsson <simon@josefsson.org>
sec> ed25519/D73CF638C53C06BE created: 2019-03-20 expires: 2023-09-19
card-no: FFFE 5D271572
ssb> ed25519/80260EE8A9B92B2B created: 2019-03-20 expires: 2023-09-19
card-no: FFFE 42315277
ssb> ed25519/51722B08FE4745A2 created: 2019-03-20 expires: 2023-09-19
card-no: FFFE 42315277
ssb> cv25519/02923D7EE76EBD60 created: 2019-03-20 expires: 2023-09-19
card-no: FFFE 42315277
jas@kaka:~$ echo foo | gpg -a --sign | gpg --verify
gpg: Signature made Thu Mar 16 22:11:02 2023 CET
gpg: using EDDSA key B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE
gpg: Good signature from "Simon Josefsson <simon@josefsson.org>" [ultimate]
jas@kaka:~$
Finally, to retrieve and sign a key, for example Andre Heinecke's, whose OpenPGP key identifier I could confirm from his business card.
jas@kaka:~$ gpg --locate-external-keys aheinecke@gnupg.com
gpg: key 1FDF723CF462B6B1: public key "Andre Heinecke <aheinecke@gnupg.com>" imported
gpg: Total number processed: 1
gpg: imported: 1
gpg: marginals needed: 3 completes needed: 1 trust model: pgp
gpg: depth: 0 valid: 2 signed: 7 trust: 0-, 0q, 0n, 0m, 0f, 2u
gpg: depth: 1 valid: 7 signed: 64 trust: 7-, 0q, 0n, 0m, 0f, 0u
gpg: next trustdb check due at 2023-05-26
pub rsa3072 2015-12-08 [SC] [expires: 2025-12-05]
94A5C9A03C2FE5CA3B095D8E1FDF723CF462B6B1
uid [ unknown] Andre Heinecke <aheinecke@gnupg.com>
sub ed25519 2017-02-13 [S]
sub ed25519 2017-02-13 [A]
sub rsa3072 2015-12-08 [E] [expires: 2025-12-05]
sub rsa3072 2015-12-08 [A] [expires: 2025-12-05]
jas@kaka:~$ gpg --edit-key "94A5C9A03C2FE5CA3B095D8E1FDF723CF462B6B1"
gpg (GnuPG) 2.2.27; Copyright (C) 2021 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
pub rsa3072/1FDF723CF462B6B1
created: 2015-12-08 expires: 2025-12-05 usage: SC
trust: unknown validity: unknown
sub ed25519/2978E9D40CBABA5C
created: 2017-02-13 expires: never usage: S
sub ed25519/DC74D901C8E2DD47
created: 2017-02-13 expires: never usage: A
The following key was revoked on 2017-02-23 by RSA key 1FDF723CF462B6B1 Andre Heinecke <aheinecke@gnupg.com>
sub cv25519/1FFE3151683260AB
created: 2017-02-13 revoked: 2017-02-23 usage: E
sub rsa3072/8CC999BDAA45C71F
created: 2015-12-08 expires: 2025-12-05 usage: E
sub rsa3072/6304A4B539CE444A
created: 2015-12-08 expires: 2025-12-05 usage: A
[ unknown] (1). Andre Heinecke <aheinecke@gnupg.com>
gpg> sign
pub rsa3072/1FDF723CF462B6B1
created: 2015-12-08 expires: 2025-12-05 usage: SC
trust: unknown validity: unknown
Primary key fingerprint: 94A5 C9A0 3C2F E5CA 3B09 5D8E 1FDF 723C F462 B6B1
Andre Heinecke <aheinecke@gnupg.com>
This key is due to expire on 2025-12-05.
Are you sure that you want to sign this key with your
key "Simon Josefsson <simon@josefsson.org>" (D73CF638C53C06BE)
Really sign? (y/N) y
gpg> quit
Save changes? (y/N) y
jas@kaka:~$
This is on my day-to-day machine, using the NitroKey Start with the offline key. No need to boot the old offline machine just to sign keys or extend expiry anymore! At FOSDEM 23 I managed to get at least one DD signature on my new key, and the Debian keyring maintainers accepted my Ed25519 key. Hopefully I can now finally let my 2014-era RSA3744 key expire on 2023-09-19 and not extend it any further. This should finish my transition to a simpler OpenPGP key setup, yay!
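As an aside, this means key maintenance can now happen on the
day-to-day machine with the token plugged in; extending expiry becomes
a one-liner, sketched here with the master key fingerprint from above
(the 1y period is just an example):
gpg --quick-set-expire B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE 1y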
The Framework is a 13.5" laptop body with swappable parts, which
makes it somewhat future-proof and certainly easily repairable,
scoring an "exceedingly rare" 10/10 score from ifixit.com.
There are two generations of the laptop's main board (both compatible
with the same body): the Intel 11th and 12th gen chipsets.
I have received my Framework, 12th generation "DIY", device in late
September 2022 and will update this page as I go along in the process
of ordering, burning-in, setting up and using the device over the
years.
Overall, the Framework is a good laptop. I like the keyboard, the
touch pad, the expansion cards. Clearly there's been some good work
done on industrial design, and it's the most repairable laptop I've
had in years. Time will tell, but it looks sturdy enough to survive me
many years as well.
This is also one of the most powerful devices I ever lay my hands
on. I have managed, remotely, more powerful servers, but this is the
fastest computer I have ever owned, and it fits in this tiny case. It
is an amazing machine.
On the downside, there's a bit of proprietary firmware required (WiFi,
Bluetooth, some graphics) and the Framework ships with a proprietary
BIOS, with currently no Coreboot support. Expect to need the
latest kernel, firmware, and hacking around a bunch of things to get
resolution and keybindings working right.
Like others, I have first found significant power management issues,
but many issues can actually be solved with some configuration. Some
of the expansion ports (HDMI, DP, MicroSD, and SSD) use power when
idle, so don't expect week-long suspend, or "full day" battery while
those are plugged in.
Finally, the expansion ports are nice, but there's only four of
them. If you plan to have a two-monitor setup, you're likely going to
need a dock.
Read on for the detailed review. For context, I'm moving from the
Purism Librem 13v4 because it
basically exploded on me. I
had, in the meantime, reverted back to an old ThinkPad X220, so I
sometimes compare the Framework with that venerable laptop as well.
This blog post has been maturing for months now. It started in
September 2022 and I declared it completed in March 2023. It's the
longest single article on this entire website, currently clocking at
about 13,000 words. It will take an average reader a full hour to go
through this thing, so I don't expect anyone to actually do
that. This introduction should be good enough for most people, read
the first section if you intend to actually buy a Framework. Jump
around the table of contents as you see fit for after you did buy the
laptop, as it might include some crucial hints on how to make it work
best for you, especially on (Debian) Linux.
Advice for buyers
These are things I wish I had known before buying:
consider buying 4 USB-C expansion cards, or at least a mix of 4
USB-A or USB-C cards, as they use less power than other cards and
you do want to fill those expansion slots otherwise they snag
around and feel insecure
you will likely need a dock or at least a USB hub if you want a
two-monitor setup, otherwise you'll run out of ports
you have to do some serious tuning to get proper (10h+ idle, 10
days suspend) power savings
in particular, beware that the HDMI, DisplayPort and
particularly the SSD and MicroSD cards take a significant amount
of power, even when sleeping, up to 2-6W for the latter two
beware that the MicroSD card is what it says: Micro; normal SD
cards won't fit, and while there might be a full-sized one
eventually, it's currently only at the prototyping stage
Current status
I have the Framework! It's set up with a fresh Debian bookworm
installation. I've run through a large number of tests and burn-in.
I have decided to use the Framework as my daily driver, and had to buy
a USB-C dock to get my two monitors
connected, which was its own adventure.
Update: Framework just announced (2023-03-23) a whole bunch of
new stuff:
The recording is available in this video and it's not your
typical keynote. It starts ~25 minutes late, audio is crap, lighting
and camera are crap, clapping seems to be from whatever staff they
managed to get together in a room, decor is bizarre, colors are
shit. It's amazing.
Specifications
Those are the specifications of the 12th gen, in general terms. Your
build will of course vary according to your needs.
CPU: i5-1240P, i7-1260P, or i7-1280P (Up to 4.4-4.8 GHz, 4+8
cores), Iris Xe graphics
4 x USB-C user-selectable expansion ports, including
USB-C
USB-A
HDMI
DP
Ethernet
MicroSD
250-1000GB SSD
3.5mm combo headphone jack
Kill switches for microphone and camera
Battery: 55Wh
Camera: 1080p 60fps
Biometrics: Fingerprint Reader
Backlit keyboard
Power Adapter: 60W USB-C (or bring your own)
ships with a screwdriver/spudger
1 year warranty
base price: 1000$CAD, but doesn't give you much, typical builds
around 1500-2000$CAD
Actual build
This is the actual build I ordered. Amounts in CAD. (1CAD =
~0.75EUR/USD.)
Base configuration
CPU: Intel Core i5-1240P (AKA Alder Lake P 8 4.4GHz
P-threads, 8 3.2GHz E-threads, 16 total, 28-64W), 1079$
Memory: 16GB (1 x 16GB) DDR4-3200, 104$
Customization
Keyboard: US English, included
Expansion Cards
2 USB-C $24
3 USB-A $36
2 HDMI $50
1 DP $50
1 MicroSD $25
1 Storage 1TB $199
Sub-total: 384$
Accessories
Power Adapter - US/Canada $64.00
Total
Before tax: 1606$
After tax and duties: 1847$
Free shipping
Quick evaluation
This is basically the TL;DR here, just focusing on broad pros/cons of
the laptop.
Pros
easily repairable (complete with QR codes pointing to repair
guides!), the 11th gen received a 10/10 score from
ifixit.com, which they call "exceedingly rare", the 12th gen
has a similar hardware design and would probably rate similarly
replaceable motherboard!!! can be reused as a NUC-like device, with a
3d-printed case, 12th gen board can be bought standalone and
retrofitted into an 11th gen case
not a passing fad: they made a first laptop with the 11th gen Intel
chipset in 2021, and a second motherboard with the 12th Intel
chipset in 2022
four modular USB-C ports which can fit HDMI, USB-C (pass-through,
can provide power on both sides), USB-A, DisplayPort, MicroSD,
external storage (250GB, 1TB), active modding community
nice power led indicating power level (charging, charged, etc) when
plugged
they used to have some difficulty keeping up with the orders: first
two batches shipped, third batch sold out, fourth batch should have
shipped in October 2021. They generally seem to keep up with
shipping. Update (August 2022): they rolled out a second line of
laptops (12th gen), first batch shipped, second batch shipped
late, September 2022 batch was generally on time, see this
spreadsheet for a crowdsourced effort to track those
supply chain issues seem to be under control as of early 2023. I
got the Ethernet expansion card shipped within a week.
Cons
compared to my previous laptop (Purism Librem
13v4), it feels strangely
bulkier and heavier; it's actually lighter than the purism (1.3kg
vs 1.4kg) and thinner (15.85mm vs 18mm) but the design of the
Purism laptop (tapered edges) makes it feel thinner
no space for a 2.5" drive
rather bright LED around the power button; it can be dimmed in the
BIOS (not low enough to my taste), but I got used to it
fan quiet when idle, but can be noisy when running, for example if
you max a CPU for a while
battery described as "mediocre" by Ars Technica (above), confirmed
poor in my tests (see below)
no RJ-45 port, and attempts at designing ones are failing
because the modular plugs are too thin to fit (according to Linux
After Dark), so unlikely to have one in the future
Update: they cracked that nut and ship a 2.5Gbps Ethernet
expansion card with a Realtek chipset, without any
firmware blob
a bit pricey for the performance, especially when compared to the
competition (e.g. Dell XPS, Apple M1)
12th gen Intel has glitchy graphics, seems like Intel hasn't fully
landed proper Linux support for that chipset yet
Initial hardware setup
A breeze.
Accessing the board
The internals are accessed through five Torx screws, but there's a nice
screwdriver/spudger that works well enough. The screws actually hold in
place so you can't even lose them.
The first setup is a bit counter-intuitive coming from the Librem
laptop, as I expected the back cover to lift and give me access to the
internals. But instead the screws release the keyboard and touch
pad assembly, so you actually need to flip the laptop back upright and
lift the assembly off to get access to the internals. Kind of
scary.
I also actually unplugged a connector while lifting the assembly because
I lifted it towards the monitor, while you actually need to lift it
to the right. Thankfully, the connector didn't break, it just
snapped off and I could plug it back in, no harm done.
Once there, everything is well indicated, with QR codes all over the
place supposedly leading to online instructions.
Bad QR codes
Unfortunately, the QR codes I tested (in the expansion card slot, the
memory slot and CPU slots) did not actually work so I wonder how
useful those actually are.
After all, they need to point to something and that means a URL, a
running website that will answer those requests forever. I bet those
will break sooner than later and in fact, as far as I can tell, they
just don't work at all. I prefer the approach taken by the MNT reform
here which designed (with the 100 rabbits folks) an actual paper
handbook (PDF).
The first QR code that's immediately visible from the back of the
laptop, in an expansion card slot, is a 404. It seems to be some
serial number URL, but I can't actually tell because, well, the page
is a 404.
I was expecting that bar code to lead me to an introduction page,
something like "how to setup your Framework laptop". Support actually
confirmed that it should point to a quickstart guide. But in a
bizarre twist, they somehow sent me the URL with the plus (+) signs
escaped, like this:
(They have also "let the team know about this for feedback and help
resolve the problem with the link" which is a support code word for
"ha-ha! nope! not my problem right now!" Trust me, I know, my own
code word is "can you please make a ticket?")
Seating disks and memory
The "DIY" kit doesn't actually have that much of a setup. If you
bought RAM, it's shipped outside the laptop in a little plastic case,
so you just seat it in as usual.
Then you insert your NVMe drive, and, if that's your fancy, you also
install your own mPCI WiFi card. If you ordered one (which was my
case), it's pre-installed.
Closing the laptop is also kind of amazing, because the keyboard
assembly snaps into place with magnets. I have actually used the
laptop with the keyboard unscrewed as I was putting the drives in and
out, and it actually works fine (and will probably void your warranty,
so don't do that). (But you can.) (But don't, really.)
Hardware review
Keyboard and touch pad
The keyboard feels nice, for a laptop. I'm used to mechanical keyboards
and I'm rather violent with those poor things. Yet the key travel is
nice and it's clickety enough that I don't feel too disoriented.
At first, the keyboard felt more laggy than my normal
workstation setup, but it turned out this was a graphics driver
issue. After enabling a compositing manager, everything feels snappy.
The touch pad feels good. The double-finger scroll works well enough,
and I don't have to wonder too much where the middle button is, it
just works.
Taps don't work, out of the box: that needs to be enabled in Xorg,
with something like this:
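A minimal sketch of such a snippet, assuming the libinput driver and
a file like /etc/X11/xorg.conf.d/30-touchpad-tapping.conf (the file
name is arbitrary):
Section "InputClass"
    Identifier "touchpad tapping"
    MatchIsTouchpad "on"
    Driver "libinput"
    Option "Tapping" "on"
EndSection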
But be aware that once you enable that tapping, you'll need to deal
with palm detection... So I have not actually enabled this in the end.
Power button
The power button is a little dangerous. It's quite easy to hit, as
it's right next to one expansion card where you are likely to plug in
a power cable. And because the expansion cards are kind of hard to
remove, you might squeeze the laptop (and the power key) when trying
to remove the expansion card next to the power button.
So obviously, don't do that. But that's not very helpful.
An alternative is to make the power button do something else. With
systemd-managed systems, it's actually quite easy. Add a
HandlePowerKey stanza to (say)
/etc/systemd/logind.conf.d/power-suspends.conf:
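A minimal drop-in looks like this (the [Login] section header is
required):
[Login]
HandlePowerKey=suspend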
And the power button will suspend! Long-press to power off doesn't
actually work as the laptop immediately suspends...
Note that there's probably half a dozen other ways of doing this,
see this, this, or that.
Special keybindings
There is a series of "hidden" (as in: not labeled on the key)
keybindings related to the fn key that I actually
find quite useful.
Key   Equivalent   Effect                   Command
---   ----------   ----------------------   ---------------
p     Pause        lock screen              xset s activate
b     Break        ?                        ?
k     ScrLk        switch keyboard layout   N/A
It looks like those are defined in the microcontroller, so it
would be possible to add some. For example, the SysRq key
is almost bound to fn+s in there.
Note that most other shortcuts like this are clearly documented
(volume, brightness, etc). One key that's less obvious is
F12 that only has the Framework logo on it. That actually
calls the keysym XF86AudioMedia which, interestingly, does
absolutely nothing here. By default, on Windows, it opens your
browser to the Framework website and, on Linux, your "default
media player".
The keyboard backlight can be cycled with fn-space. The
dimmer version is dim enough, and the keybinding is easy to find in
the dark.
A skinny elephant would be performed with alt+PrtScr (above F11)+KEY, so
for example alt+fn+F11+b should do a hard reset. This comment suggests
you need to hold fn only if "function lock" is on, but that's
actually the opposite of my experience.
Out of the box, some of the fn keys don't work. Mute,
volume up/down, brightness, monitor changes, and the airplane mode key
all do basically nothing. They don't send proper keysyms to Xorg at
all.
This is a known problem and it's related to the fact that the
laptop has light sensors to adjust the brightness
automatically. Somehow some of those keys (e.g. the brightness
controls) are supposed to show up as a different input device, but
don't seem to work correctly. It seems like the solution is for the
Framework team to write a driver specifically for this, but so far no
progress since July 2022.
In the meantime, the fancy functionality can supposedly be disabled with:
echo 'blacklist hid_sensor_hub' | sudo tee /etc/modprobe.d/framework-als-blacklist.conf
Kill switches
The Framework has two "kill switches": one for the camera and the
other for the microphone. The camera one actually disconnects the USB
device when turned off, and the mic one seems to cut the circuit. It
doesn't show up as muted, it just stops feeding the sound.
Both kill switches are around the main camera, on top of the monitor,
and quite discreet. They turn "red" when enabled (i.e. "red" means
"turned off").
Monitor
The monitor looks pretty good to my untrained eyes. I have yet to do
photography work on it, but some photos I looked at look sharp and the
colors are bright and lively. The blacks are dark and the screen is
bright.
I have yet to use it in full sunlight.
The dimmed light is very dim, which I like.
Screen backlight
I bind brightness keys to xbacklight in i3, but out of the box I get
this error:
sep 29 22:09:14 angela i3[5661]: No outputs have backlight property
It just requires this blob in /etc/X11/xorg.conf.d/backlight.conf:
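The usual fix is a device section that forces the intel driver and
points it at the right backlight interface; a sketch, assuming the
intel Xorg driver is in use:
Section "Device"
    Identifier "Intel Graphics"
    Driver "intel"
    Option "Backlight" "intel_backlight"
EndSection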
This way I can control the actual backlight power with the brightness
keys, and they do significantly reduce power usage.
Multiple monitor support
I have been able to hook up my two old monitors to the HDMI and
DisplayPort expansion cards on the laptop. The lid closes without
suspending the machine, and everything works great.
I actually run out of ports, even with a 4-port USB-A hub, which gives
me a total of 7 ports:
power (USB-C)
monitor 1 (DisplayPort)
monitor 2 (HDMI)
USB-A hub, which adds:
keyboard (USB-A)
mouse (USB-A)
Yubikey
external sound card
Now the latter, I might be able to get rid of if I switch to a
combo-jack headset, which I do have (and still need to test).
But still, this is a problem. I'll probably need a powered USB-C dock
and better monitors, possibly with some Thunderbolt chaining, to
save yet more ports.
But that means more money into this setup, argh. And figuring out my
monitor situation is the kind of thing I'm not that big
of a fan of. And neither is shopping for USB-C (or is it Thunderbolt?)
hubs.
My normal autorandr setup doesn't work: I have tried saving a
profile and it doesn't get autodetected, so I also first need to do:
autorandr -l framework-external-dual-lg-acer
The magic:
autorandr -l horizontal
... also works well.
The worst problem with those monitors right now is that they have a
radically smaller resolution than the main screen on the laptop, which
means I need to reset the font scaling to normal every time I switch
back and forth between those monitors and the laptop, which means I
actually need to do this:
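A sketch of that dance, where the DPI values and the second profile
name are placeholders:
autorandr -l framework-external-dual-lg-acer
echo Xft.dpi: 96 | xrdb -merge
... and when going back to the internal panel:
autorandr -l framework
echo Xft.dpi: 144 | xrdb -merge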
Expansion ports
I ordered a total of 10 expansion ports.
I did manage to initialize the 1TB drive as an encrypted storage,
mostly to keep photos as this is something that takes a massive amount
of space (500GB and counting) and that I (unfortunately) don't work on
very often (but still carry around).
The expansion ports are fancy and nice, but not actually that
convenient. They're a bit hard to take out: you really need to crimp
your fingernails on there and pull hard to take them out. There's a
little button next to them to release, I think, but at first it feels
a little scary to pull those pucks out of there. You get used to it
though, and it's one of those things you can do without looking
eventually.
There's only four expansion ports. Once you have two monitors, the
drive, and power plugged in, bam, you're out of ports; there's nowhere
to plug my Yubikey. So if this is going to be my daily driver, with a
dual monitor setup, I will need a dock, which means more crap firmware
and uncertainty, which isn't great. There are actually plans to make a
dual-USB card, but that is blocked on designing an actual
board for this.
I can't wait to see more expansion ports produced. There's an Ethernet
expansion card which quickly went out of stock basically the day
it was announced, but was eventually restocked.
I would like to see a proper SD-card reader. There's a MicroSD card
reader, but that obviously doesn't work for normal SD cards, which
would be more broadly compatible anyways (because you can have a
MicroSD to SD card adapter, but I have never heard of the
reverse). Someone actually found an SD card reader that fits and
then someone else managed to cram it in a 3D printed case, which
is kind of amazing.
Still, I really like that idea that I can carry all those little
adapters in a pouch when I travel and can basically do anything I
want. It does mean I need to shuffle through them to find the right
one which is a little annoying. I have an elastic band to keep them
lined up so that all the ports show the same side, to make it easier
to find the right one. But that quickly gets undone and instead I have
a pouch full of expansion cards.
Another awesome thing with the expansion cards is that they don't just
work on the laptop: anything that takes USB-C can take those cards,
which means you can use it to connect an SD card to your phone, for
backups, for example. Heck, you could even connect an external display
to your phone that way, assuming that's supported by your phone of
course (and it probably isn't).
The expansion ports do take up some power, even when idle. See the
power management section below, and particularly the power usage
tests for details.
USB-C charging
One thing that is really a game changer for me is USB-C charging. It's
hard to overstate how convenient this is. I often have a USB-C cable
lying around to charge my phone, and I can just grab that thing and
pop it in my laptop. And while it will obviously not charge as fast as
the provided charger, it will stop draining the battery at least.
(As I wrote this, I had the laptop plugged in the Samsung charger that
came with a phone, and it was telling me it would take 6 hours to
charge the remaining 15%. With the provided charger, that flew down to
15 minutes. Similarly, I can power the laptop from the power grommet
on my desk, reducing clutter as I have that single wire out there
instead of the bulky power adapter.)
I also really like the idea that I can charge my laptop with a power
bank or, heck, with my phone, if push comes to shove. (And
vice-versa!)
This is awesome. And it works from any of the expansion ports, of
course. There's a little LED next to the expansion ports as well,
which indicates the charge status:
red/amber: charging
white: charged
off: unplugged
I couldn't find documentation about this, but the forum
answered.
This is something of a recurring theme with the Framework. While it
has a good knowledge base and repair/setup guides (and the
forum is awesome), it doesn't have a good "owner manual" that
shows you the different parts of the laptop and what they do. Again,
something the MNT reform did well.
Another thing that people are asking about is an external sleep
indicator: because the power LED is on the main keyboard assembly,
you don't actually see whether the device is active or not when the
lid is closed.
Finally, I wondered what happens when you plug in multiple power
sources and it turns out the charge controller is actually pretty
smart: it will pick the best power source and use it. The only
downside is it can't use multiple power sources, but that seems like
a bit much to ask.
Multimedia and other devices
Those things also work:
webcam: splendid, best webcam I've ever had (but my standards are
really low)
onboard mic: works well, good gain (maybe a bit much)
onboard speakers: sound okay, a little metal-ish, loud enough to be
annoying, see this thread for benchmarks, apparently pretty
good speakers
Combo jack mic tests
The Framework laptop ships with a combo jack on the left side, which
allows you to plug in a CTIA (source) headset. In human
terms, it's a device that has both a stereo output and a mono input,
typically a headset or ear buds with a microphone somewhere.
It works, which is better than the Purism (which only had audio
out), but is par for the course for that kind of onboard
hardware. Because of electrical interference, such sound cards very
often get lots of noise from the board.
With a Jabra Evolve 40, the built-in USB sound card generates
basically zero noise on silence (invisible down to -60dB in Audacity)
while plugging it in directly generates a solid -30dB hiss. There is
a noise-reduction system in that sound card, but the difference is
still quite striking.
On a comparable setup (curie, a 2017 Intel NUC), there is
also a hiss with the Jabra headset, but it's quieter, more in the order
of -40/-50 dB, a noticeable difference. Interestingly, testing with my
Mee Audio Pro M6 earbuds leads to a little more hiss on curie, more on
the -35/-40 dB range, close to the Framework.
Also note that another sound card, the Antlion USB adapter that comes
with the ModMic 4, also gives me pretty close to silence on a quiet
recording, picking up less than -50dB of background noise. It's
actually probably picking up the fans in the office, which do make
audible noises.
In other words, the hiss of the sound card built in the Framework
laptop is so loud that it makes more noise than the quiet fans in the
office. Or, another way to put it is that two USB sound cards (the
Jabra and the Antlion) are able to pick up ambient noise in my office
but not the Framework laptop.
See also my audio page.
Performance tests
Compiling Linux 5.19.11
On a single core, compiling the Debian version of the Linux kernel
takes around 100 minutes:
I had to plug in the normal power supply after a few minutes because
the battery would actually run down while using my desk's power
grommet (34 watts).
During compilation, fans were spinning really hard, quite noisy, but
not painfully so.
The laptop was sucking 55 watts of power, steadily:
Time User Nice Sys Idle IO Run Ctxt/s IRQ/s Fork Exec Exit Watts
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
Average 87.9 0.0 10.7 1.4 0.1 17.8 6583.6 5054.3 233.0 223.9 233.1 55.96
GeoMean 87.9 0.0 10.6 1.2 0.0 17.6 6427.8 5048.1 227.6 218.7 227.7 55.96
StdDev 1.4 0.0 1.2 0.6 0.2 3.0 1436.8 255.5 50.0 47.5 49.7 0.20
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
Minimum 85.0 0.0 7.8 0.5 0.0 13.0 3594.0 4638.0 117.0 111.0 120.0 55.52
Maximum 90.8 0.0 12.9 3.5 0.8 38.0 10174.0 5901.0 374.0 362.0 375.0 56.41
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
Summary:
CPU: 55.96 Watts on average with standard deviation 0.20
Note: power read from RAPL domains: package-0, uncore, package-0, core, psys.
These readings do not cover all the hardware in this device.
memtest86+
I ran Memtest86+ v6.00b3. It shows something like this:
Software setup
Once I had everything in the hardware setup, I figured, voilà, I'm
done, I'm just going to boot this beautiful machine and I can get back
to work.
I don't understand why I am so naïve sometimes. It's mind-boggling.
Obviously, it didn't happen that way at all, and I spent the best of
the three following days tinkering with the laptop.
Secure boot and EFI
First, I couldn't boot off of the NVMe drive I transferred from the
previous laptop (the Purism) and the
BIOS was not very helpful: it was just complaining about not finding
any boot device, without dropping me in the real BIOS.
At first, I thought it was a problem with my NVMe drive, because it's
not listed in the compatible SSD drives from upstream. But I
figured out how to enter BIOS (press F2 manically, of
course), which showed the NVMe drive was actually detected. It just
didn't boot, because it was an old (2010!!) Debian install without
EFI.
So from there, I disabled secure boot, and booted a grml image to
try to recover. And by "boot" I mean, I managed to get to the grml
boot loader which promptly failed to load its own root file system
somehow. I still have to investigate exactly what happened there, but
it failed some time after the initrd load with:
Unable to find medium containing a live file system
This, it turns out, was fixed in Debian lately, so a daily GRML
build will not have this problem. The upcoming 2022 release
(likely 2022.10 or 2022.11) will also get the fix.
I did manage to boot the development version of the Debian
installer which was a surprisingly good experience: it mounted the
encrypted drives and did everything pretty smoothly. It even offered
to reinstall the boot loader, but that ultimately (and correctly, as
it turns out) failed because I didn't have a /boot/efi partition.
At this point, I realized there was no easy way out of this, and I
just proceeded to completely reinstall Debian. I had a spare NVMe
drive lying around (backups FTW!) so I just swapped that in, rebooted
in the Debian installer, and did a clean install. I wanted to switch
to bookworm anyways, so I guess that's done too.
Storage limitations
Another thing that happened during setup is that I tried to copy over
the internal 2.5" SSD drive from the Purism to the Framework 1TB
expansion card. There's no 2.5" slot in the new laptop, so that's
pretty much the only option for storage expansion.
I was tired and did something wrong. I ended up wiping the partition
table on the original 2.5" drive.
Oops.
It might be recoverable, but just restoring the partition table
didn't work either, so I'm not sure how to recover the data
there. Normally, everything on my laptops and workstations is designed
to be disposable, so that wasn't that big of a problem. I did manage
to recover most of the data thanks to git-annex reinit, but
that was a little hairy.
Bootstrapping Puppet
Once I had some networking, I had to install all the packages I
needed. The time I spent setting up my workstations with Puppet has
finally paid off. What I actually did was to restore two critical
directories:
/etc/ssh
/var/lib/puppet
So that I would keep the previous machine's identity. That way I could
contact the Puppet server and install whatever was missing. I used my
Puppet optimization
trick to do a batch
install and then I had a good base setup, although not exactly as it
was before. 1700 packages were installed manually on angela before
the reinstall, and not in Puppet.
I did not inspect each one individually, but I did go through /etc
and copied over more SSH keys, for backups and SMTP over SSH.
LVFS support
It looks like there's support for the (de-facto) standard LVFS
firmware update system. At least I was able to update the UEFI
firmware with a simple:
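Presumably the standard fwupd workflow, something like:
fwupdmgr refresh
fwupdmgr update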
Those instructions come from the beta forum post. I performed the
BIOS update on 2023-01-16T16:00-0500.
Resolution tweaks
The Framework laptop resolution (2256px X 1504px) is big enough to
give you a pretty small font size, so welcome to the marvelous world
of "scaling".
The Debian wiki page has a few tricks for this.
Console
This will make the console and grub fonts more readable:
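On Debian, that presumably means a bigger console font in
/etc/default/console-setup (the face and size here are my choices,
applied with setupcon):
FONTFACE="Terminus"
FONTSIZE="16x32"
... and a fixed video mode for grub in /etc/default/grub (applied with
update-grub), which this post indeed uses further down:
GRUB_GFXMODE=1024x768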
Xorg
Adding this to your .Xresources will make everything look much bigger:
! 1.5*96
Xft.dpi: 144
Apparently, some of this can also help:
! These might also be useful depending on your monitor and personal preference:
Xft.autohint: 0
Xft.lcdfilter: lcddefault
Xft.hintstyle: hintfull
Xft.hinting: 1
Xft.antialias: 1
Xft.rgba: rgb
In my experience it also makes things look a little fuzzier, which is
frustrating because you have this awesome monitor but everything looks
out of focus. Just bumping Xft.dpi by a 1.5 factor looks good to me.
The Debian Wiki has a page on HiDPI, but it's not as good as the
Arch Wiki, where the above blurb comes from. I am not using the
latter because I suspect it's causing some of the "fuzziness".
TODO: find the equivalent of this GNOME hack in i3? (gsettings set
org.gnome.mutter experimental-features
"['scale-monitor-framebuffer']"), taken from this Framework
guide
Issues
BIOS configuration
The Framework BIOS has some minor issues. One issue I personally
encountered is that I had disabled Quick boot and Quiet boot in
the BIOS to diagnose the above boot issues. This, in turn, triggers a
bug where the BIOS boot manager (F12) would just hang
completely. It would also fail to boot from an external USB drive.
The current fix (as of BIOS 3.03) is to re-enable both Quick
boot and Quiet boot. Presumably this is something that will get
fixed in a future BIOS update.
Note that the following keybindings are active in the BIOS POST
check:
Key      Meaning
------   -----------------------
F2       Enter BIOS setup menu
F12      Enter BIOS boot manager
Delete   Enter BIOS setup menu
WiFi compatibility issues
I couldn't make WiFi work at first. Obviously, the default Debian
installer doesn't ship with proprietary firmware (although that might
change soon) so the WiFi card didn't work out of the box. But even
after copying the firmware through a USB stick, I couldn't quite
manage to find the right combination of ip/iw/wpa-supplicant
(yes, after repeatedly copying a bunch more packages over to get
those bootstrapped). (Next time I should probably try something like
this post.)
Thankfully, I had a little USB-C dongle with a RJ-45 jack lying
around. That also required a firmware blob, but it was a single
package to copy over, and with that loaded, I had network.
Eventually, I did manage to make WiFi work; the problem was more on
the side of "I forgot how to configure a WPA network by hand from the
commandline" than anything else. NetworkManager worked fine and got
WiFi working correctly.
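For reference, that by-hand incantation is roughly the following
sketch, where the SSID, passphrase, and interface name are
placeholders:
ip link set wlan0 up
wpa_passphrase MYSSID MYPASSPHRASE > wpa.conf
wpa_supplicant -B -i wlan0 -c wpa.conf
dhclient wlan0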
Note that this is with Debian bookworm, which has the 5.19 Linux
kernel, and with the firmware-nonfree (firmware-iwlwifi,
specifically) package.
Battery life
I was getting about 7 hours of battery on the Purism Librem
13v4, and that's after a year or two of wear on the battery. Now, I
still have about 7 hours of battery life, which is nicer than my old
ThinkPad X220 (20 minutes!) but really, it's not that good for a new
generation laptop. The 12th generation Intel chipset probably improved
things compared to the previous Framework laptop (but I don't have
an 11th gen Framework to compare with).
(Note that those are estimates from my status bar, not wall clock
measurements. They should still be comparable between the Purism and
Framework, that said.)
The battery life doesn't seem up to, say, Dell XPS 13, ThinkPad X1, and
of course not the Apple M1, where I would expect 10+ hours of battery
life out of the box.
That said, I do get those kinds of estimates when the machine is fully
charged and idle. In fact, when everything is quiet and nothing is
plugged in, I get dozens of hours of battery life estimated (I've
seen 25h!). So power usage fluctuates quite a bit depending on usage,
which I guess is expected.
Concretely, so far, light web browsing, reading emails and writing
notes in Emacs (e.g. this file) takes about 8W of power:
Expansion cards matter a lot in the battery life (see below for a
thorough discussion), my normal setup is 2xUSB-C and 1xUSB-A (yes,
with an empty slot, and yes, to save power).
Interestingly, playing a (720p) video in a window takes up
more power (10.5W) than in full screen (9.5W) but I blame that on my
desktop setup (i3 + compton)... Not sure if mpv hits the
VA-API, maybe not in windowed mode. Similar results with 1080p,
interestingly, except the window struggles to keep up altogether. Full
screen playback takes a relatively comfortable 9.5W, which means a
solid 5h+ of playback, which is fine by me.
Fooling around the web, small edits, youtube-dl, and I'm at around 80%
battery after about an hour, with an estimated 5h left, which is a
little disappointing. I had a 7h remaining estimate before I started
goofing around Discourse, so I suspect the website is a pretty
big battery drain, actually. I see about 10-12 W, while I was probably at
half that (6-8W) just playing music with mpv in the background...
In other words, it looks like editing posts in Discourse with Firefox
takes a solid 4-6W of power. Amazing and gross.
(When writing about abusive power usage generates more power usage, is
that a heisenbug? Or a schrödinbug?)
Power management
Compared to the Purism Librem 13v4, the ongoing power usage seems to
be slightly better. An anecdotal metric is that the Purism would take
800mA idle, while the more powerful Framework manages a little over
500mA as I'm typing this, fluctuating between 450 and 600mA. That is
without any active expansion card, except the storage. Those numbers
come from the output of tlp-stat -b and, unfortunately, the "ampere"
unit makes it quite hard to compare those, because voltage is not
necessarily the same between the two platforms.
TODO: i915 driver has a lot of parameters, including some about
power saving, see, again, the arch wiki, and particularly
enable_fbc=1
TL;DR: power management on the laptop is an issue, but there are various
tweaks you can make to improve it. Try:
powertop --auto-tune
apt install tlp && systemctl enable tlp
nvme.noacpi=1 mem_sleep_default=deep on the kernel command line
may help with standby power usage
keep only USB-C expansion cards plugged in, all others suck power
even when idle
consider upgrading the BIOS to latest beta (3.06 at the time of
writing), unverified power savings
latest Linux kernels (6.2) promise power savings as well
(unverified)
Update: also try to follow the official optimization guide. It
was made for Ubuntu but will probably also work for your distribution
of choice with a few tweaks. They recommend using tlpui but it's
not packaged in Debian. There is, however, a Flatpak
release. In my case, it resulted in the following diff to
tlp.conf: tlp.patch.
Background on CPU architecture
There were power problems in the 11th gen Framework laptop, according
to this report from Linux After Dark, so the issues with power
management on the Framework are not new.
The 12th generation Intel CPU (AKA "Alder Lake") is a big-little
architecture with "power-saving" and "performance" cores. There
used to be performance problems introduced by the scheduler in Linux
5.16 but those were eventually fixed in 5.18, which uses
Intel's hardware as an "intelligent, low-latency hardware-assisted
scheduler". According to Phoronix, the 5.19 release improved the
power saving, at the cost of some performance penalty. There were also patch
series to make the scheduler configurable, but it doesn't look like
those have been merged as of 5.19. There was also a session about this
at the 2022 Linux Plumbers, but they stopped short of
talking more about the specific problems Linux is facing in Alder
lake:
Specifically, the kernel's energy-aware scheduling heuristics don't
work well on those CPUs. A number of features present there
complicate the energy picture; these include SMT, Intel's "turbo
boost" mode, and the CPU's internal power-management mechanisms. For
many workloads, running on an ostensibly more power-hungry Pcore can
be more efficient than using an Ecore. Time for discussion of the
problem was lacking, though, and the session came to a close.
All this to say that the 12th gen Intel line shipped with this Framework
series should have better power management thanks to its
power-saving cores. And Linux has had the scheduler changes to make
use of this (but maybe is still having trouble). In any case, this
might not be the source of power management problems on my laptop,
quite the opposite.
Also note that the firmware updates for various chipsets are
supposed to improve things eventually.
On the other hand, The Verge simply declared the whole P-series
a mistake...
Attempts at improving power usage
I did try to follow some of the tips in this forum post. The
tricks powertop --auto-tune and tlp's
PCIE_ASPM_ON_BAT=powersupersave basically did nothing: I was stuck
at 10W power usage in powertop (600+mA in tlp-stat).
Apparently, I should be able to reach the C8 CPU power state (or
even C9, C10) in powertop, but I seem to be stuck at
C7. (Although I'm not sure how to read that tab in powertop: in the
Core(HW) column there's only C3/C6/C7 states, and most cores are 85%
in C7 or maybe C6. But the next column over does show many CPUs in
C10 states.)
As it turns out, the graphics card actually takes up a good chunk of
power unless proper power management is enabled (see below). After
tweaking this, I did manage to get down to around 7W power usage in
powertop.
Expansion cards actually do take up power, and so does the screen,
obviously. The fully-lit screen takes a solid 2-3W of power compared
to the fully dimmed screen. When removing all expansion cards and
making the laptop idle, I can spin it down to 4 watts of power usage at
the moment, and an amazing 2 watts with the screen turned off.
Caveats
Abusive (10W+) power usage that I initially found could be a problem
with my desktop configuration: I have this silly status bar that
updates every second and probably causes redraws... The CPU certainly
doesn't seem to spin down below 1GHz. Also note that this is with an
actual desktop running with everything: it could very well be that
some things (I'm looking at you Signal Desktop) take up unreasonable
amount of power on their own (hello, 1W/electron, sheesh). Syncthing
and containerd (Docker!) also seem to take a good 500mW just sitting
there.
Beyond my desktop configuration, this could, of course, be a
Debian-specific problem; your favorite distribution might be better at
power management.
Idle power usage tests
Some expansion cards waste energy, even when unused. Here is a summary
of the findings from the powerstat page. I also include other
devices tested in this page for completeness:
Device         Minimum   Average   Max     Stdev    Note
------------   -------   -------   -----   ------   ----------------------------------------------------
Screen, 100%   2.4W      2.6W      2.8W    N/A
Screen, 1%     30mW      140mW     250mW   N/A
Backlight 1    290mW     ?         ?       ?        fairly small, all things considered
Backlight 2    890mW     1.2W      3W?     460mW?   geometric progression
Backlight 3    1.69W     1.5W      1.8W?   390mW?   significant power use
Radios         100mW     250mW     N/A     N/A
USB-C          N/A       N/A       N/A     N/A      negligible power drain
USB-A          10mW      10mW      ?       10mW     almost negligible
DisplayPort    300mW     390mW     600mW   N/A      not passive
HDMI           380mW     440mW     1W?     20mW     not passive
1TB SSD        1.65W     1.79W     2W      12mW     significant, probably higher when busy
MicroSD        1.6W      3W        6W      1.93W    highest power usage, possibly even higher when busy
Ethernet       1.69W     1.64W     1.76W   N/A      comparable to the SSD card
So it looks like all expansion cards but the USB-C ones are active,
i.e. they draw power when idle. The USB-A cards are the least concern,
sucking out 10mW, pretty much within the margin of error. But both the
DisplayPort and HDMI cards do take a few hundred milliwatts. It looks
like USB-A connectors have this fundamental flaw: they necessarily
draw some power because they lack the power negotiation features of
USB-C. At least according to this post:
It seems the USB A must have power going to it all the time, that
the old USB 2 and 3 protocols, the USB C only provides power when
there is a connection. Old versus new.
Apparently, this is a problem specific to the USB-C to USB-A
adapter that ships with the Framework. Some people have actually
changed their orders to all USB-C because of this problem, but I'm
not sure the problem is as serious as claimed in the forums. I
couldn't reproduce the "one watt" power drains suggested elsewhere,
at least not repeatedly. (A previous version of this post did show
such a power drain, but it was in a less controlled test
environment than the series of more rigorous tests above.)
The worst offenders are the storage cards: the SSD drive takes at
least one watt of power and the MicroSD card seems to want to take all
the way up to 6 watts of power, both just sitting there doing
nothing. This confirms claims of 1.4W for the SSD (but not
5W) power usage found elsewhere. The former post has
instructions on how to disable the card in software. The MicroSD card
has been reported as using 2 watts, but I've seen it as high as 6
watts, which is pretty damning.
The Framework team has a beta update for the DisplayPort adapter
but currently only for Windows (LVFS technically possible, "under
investigation"). A USB-A firmware update is alsounder
investigation. It is therefore likely at least some of those power
management issues will eventually be fixed.
Note that the upcoming Ethernet card has a reported 2-8W power usage,
depending on traffic. I did my own power usage tests in
powerstat-wayland and they seem lower than 2W.
The upcoming 6.2 Linux kernel might also improve battery usage when
idle, see this Phoronix article for details, likely in early
2023.
Idle power usage tests under Wayland
Update: I redid those tests under Wayland, see powerstat-wayland
for details. The TL;DR: is that power consumption is either smaller or
similar.
Idle power usage tests, 3.06 beta BIOS
I redid the idle tests after the 3.06 beta BIOS update and ended
up with these results:
Device             Minimum   Average   Max     Stdev   Note
----------------   -------   -------   -----   -----   ---------------------------------------------
Baseline           1.96W     2.01W     2.11W   30mW    1 USB-C, screen off, backlight off, no radios
2 USB-C            1.95W     2.16W     3.69W   430mW   USB-C confirmed as mostly passive...
3 USB-C            1.95W     2.16W     3.69W   430mW   ... although with extra stdev
1TB SSD            3.72W     3.85W     4.62W   200mW   unchanged from before upgrade
1 USB-A            1.97W     2.18W     4.02W   530mW   unchanged
2 USB-A            1.97W     2.00W     2.08W   30mW    unchanged
3 USB-A            1.94W     1.99W     2.03W   20mW    unchanged
MicroSD w/o card   3.54W     3.58W     3.71W   40mW    significant improvement! 2-3W power saving!
MicroSD w/ card    3.53W     3.72W     5.23W   370mW   new measurement! increased deviation
DisplayPort        2.28W     2.31W     2.37W   20mW    unchanged
1 HDMI             2.43W     2.69W     4.53W   460mW   unchanged
2 HDMI             2.53W     2.59W     2.67W   30mW    unchanged
External USB       3.85W     3.89W     3.94W   30mW    new result
Ethernet           3.60W     3.70W     4.91W   230mW   unchanged
Note that the table summary is different than the previous table: here
we show the absolute numbers while the previous table was doing a
confusing attempt at showing relative (to the baseline) numbers.
Conclusion: the 3.06 BIOS update did not significantly change idle
power usage stats except for the MicroSD card which has
significantly improved.
The new "external USB" test is also interesting: it shows how the
provided 1TB SSD card performs (admirably) compared to existing
devices. The other new result is the MicroSD card with a card
inserted which, interestingly, uses less power than the 1TB SSD drive.
That's 8mAh per 10 minutes (and 2 seconds), or 48mA, or, with this
battery, about 127 hours or roughly 5 days of standby. Not bad!
In comparison, here is my really old x220, before:
sep 29 22:13:54 emma systemd-sleep[176315]: /sys/class/power_supply/BAT0/energy_now = 5070 [mWh]
... after:
sep 29 22:23:54 emma systemd-sleep[176486]: /sys/class/power_supply/BAT0/energy_now = 4980 [mWh]
... which is 90mWh in 10 minutes, or a whopping 540mW, which was
possibly okay when this battery was new (62000mWh, so about 100
hours, or about 5 days), but this battery is almost dead and has
only 5210mWh when full, so only 10 hours standby.
And here is the Framework performing a similar test, before:
... which is 49mAh in a little over 10 minutes (and 4 seconds), or
292mA, much more than the Purism, but half of the X220. At this rate,
the battery would last on standby only 12 hours!! That is pretty
bad.
Note that this was done with the following expansion cards:
2 USB-C
1 1TB SSD drive
1 USB-A with a hub connected to it, with keyboard and LAN
Preliminary tests without the hub (over one minute) show that it
doesn't significantly affect this power consumption (300mA).
This guide also suggests booting with nvme.noacpi=1 but this
still gives me about 5mAh/min (or 300mA).
Adding mem_sleep_default=deep to the kernel command line does make a
difference. Before:
... which is 2mAh in 74 seconds, which is 97mA, brings us to a more
reasonable 36 hours, or a day and a half. It's still above the x220
power usage, and more than an order of magnitude more than the Purism
laptop. It's also far from the 0.4% promised by upstream, which
would be 14mA for the 3500mAh battery.
It should also be noted that this "deep" sleep mode is a little more
disruptive than regular sleep. As you can see by the timing, it took
more than 10 seconds for the laptop to resume, which feels a little
alarming as you're banging the keyboard to bring it back to life.
You can confirm the current sleep mode with:
# cat /sys/power/mem_sleep
s2idle [deep]
In the above, deep is selected. You can change it on the fly with:
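Presumably by writing the desired mode into that same file, which is
the standard kernel interface:
echo deep | sudo tee /sys/power/mem_sleep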
... better! 6 mAh in about 6 minutes, works out to 63.5mA, so more
than two days standby.
A longer test:
oct 01 09:22:56 angela systemd-sleep[62978]: /sys/class/power_supply/BAT1/charge_now = 3327 [mAh]
oct 01 12:47:35 angela systemd-sleep[63219]: /sys/class/power_supply/BAT1/charge_now = 3147 [mAh]
That's 180mAh in about 3.5h, 52mA! Now at 66h, or almost 3 days.
I wasn't sure why I was seeing such fluctuations in those tests, but
as it turns out, expansion card power tests show that they do
significantly affect power usage, especially the SSD drive, which can
take up to two full watts of power even when idle. I didn't control
for expansion cards in the above tests, running them with whatever
card I had plugged in without paying attention, so that's likely the
cause of the high power usage and fluctuations.
It might be possible to work around this problem by disabling USB
devices before suspend. TODO. See also this post.
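A sketch of what that could look like, as an untested systemd
system-sleep hook that deauthorizes all USB devices before suspend and
reauthorizes them on resume (the path and file name here are mine, not
an established convention):
#!/bin/sh
# /usr/lib/systemd/system-sleep/usb-off.sh (hypothetical, untested)
# systemd calls system-sleep hooks with "pre" before sleeping and
# "post" after resuming; writing 0/1 to the sysfs "authorized" files
# deauthorizes/reauthorizes USB devices.
case "$1" in
    pre)  for d in /sys/bus/usb/devices/*/authorized; do echo 0 > "$d"; done ;;
    post) for d in /sys/bus/usb/devices/*/authorized; do echo 1 > "$d"; done ;;
esac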
In the meantime, I have been able to get much better suspend
performance by unplugging all modules. Then I get this result:
oct 04 11:15:38 angela systemd-sleep[257571]: /sys/class/power_supply/BAT1/charge_now = 3203 [mAh]
oct 04 15:09:32 angela systemd-sleep[257866]: /sys/class/power_supply/BAT1/charge_now = 3145 [mAh]
Which is 14.8mA! Almost exactly the number promised by Framework! With
a full battery, that means a 10 days suspend time. This is actually
pretty good, and far beyond what I was expecting when starting down
this journey.
So, once the expansion cards are unplugged, suspend power usage is
actually quite reasonable. More detailed standby tests are available
in the standby-tests page, with a summary below.
There is also some hope that the Chromebook edition, which was
specifically designed with a specification of 14 days standby
time, could bring some firmware improvements back to the normal
product line. Some of those issues were reported upstream in April
2022, but there doesn't seem to have been any progress there
since.
TODO: one final solution here is suspend-then-hibernate, which
Windows uses for this
TODO: consider implementing the S0ix sleep states, see also troubleshooting
TODO: consider https://github.com/intel/pm-graph
Standby expansion cards test results
This table is a summary of the more extensive standby-tests I have performed:
Device        Wattage   Amperage   Days   Note
-----------   -------   --------   ----   -------------------------------------
baseline      0.25W     16mA       9      sleep=deep nvme.noacpi=1
s2idle        0.29W     18.9mA     ~7     sleep=s2idle nvme.noacpi=1
normal nvme   0.31W     20mA       ~7     sleep=s2idle without nvme.noacpi=1
1 USB-C       0.23W     15mA       ~10
2 USB-C       0.23W     14.9mA            same as above
1 USB-A       0.75W     48.7mA     3      +500mW (!!) for the first USB-A card!
2 USB-A       1.11W     72mA       2      +360mW
3 USB-A       1.48W     96mA       <2     +370mW
1TB SSD       0.49W     32mA       <5     +260mW
MicroSD       0.52W     34mA       ~4     +290mW
DisplayPort   0.85W     55mA       <3     +620mW (!!)
1 HDMI        0.58W     38mA       ~4     +250mW
2 HDMI        0.65W     42mA       <4     +70mW
Conclusions:
USB-C cards take no extra power on suspend, possibly less
than empty slots, more testing required
USB-A cards take a lot more power on suspend
(300-500mW) than on regular idle (~10mW, almost negligible)
1TB SSD and MicroSD cards seem to take a reasonable
amount of power (260-290mW), compared to their runtime
equivalents (1-6W!)
DisplayPort takes a surprisingly large amount of power (620mW), almost
double its average runtime usage (390mW)
HDMI cards take, surprisingly, less power (250mW) in
standby than the DP card (620mW)
and oddly, a second card adds less power usage (70mW?!) than the
first, maybe a circuit is used by both?
Standby expansion cards test results, 3.06 beta BIOS
Framework recently (2022-11-07) announced that they will publish
a firmware upgrade to address some of the USB-C issues, including
power management. This could positively affect the above result,
improving both standby and runtime power usage.
The update came out in December 2022 and I redid my analysis with
the following results:
Device        Wattage   Amperage   Days   Note
-----------   -------   --------   ----   ----------------------------------
baseline      0.25W     16mA       9      no cards, same as before upgrade
1 USB-C       0.25W     16mA       9      same as before
2 USB-C       0.25W     16mA       9      same
1 USB-A       0.80W     62mA       3      +550mW!! worse than before
2 USB-A       1.12W     73mA       <2     +320mW, on top of the above, bad!
Ethernet      0.62W     40mA       3-4    new result, decent
1TB SSD       0.52W     34mA       4      a bit worse than before (+2mA)
MicroSD       0.51W     22mA       4      same
DisplayPort   0.52W     34mA       4+     upgrade improved by 300mW
1 HDMI        ?         38mA       ?      same
2 HDMI        ?         45mA       ?      a bit worse than before (+3mA)
Normal        1.08W     70mA       ~2     Ethernet, 2 USB-C, USB-A
Full results in standby-tests-306. The big takeaway for me is that
the update did not improve power usage on the USB-A ports which is a
big problem for my use case. There is a notable improvement on the
DisplayPort power consumption which brings it more in line with the
HDMI connector, but it still doesn't properly turn off on suspend
either.
Even worse, the USB-A ports now sometimes fail to resume after
suspend, which is pretty annoying. This is a known problem
that will hopefully get fixed in the final release.
I looked at building this myself but failed to run it. I opened an
RFP in Debian so that we can ship this in Debian, and also documented
my work there.
Note that there is now a counter that tracks charge/discharge
cycles. It's visible in tlp-stat -b, which is a nice
improvement:
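The counter shows up in the usual sysfs-path style tlp-stat uses, so
it should look something like this (the value here is illustrative):
/sys/class/power_supply/BAT1/cycle_count = 16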
Ethernet expansion card
The Framework ethernet expansion card is a fancy little doodle:
"2.5Gbit/s and 10/100/1000Mbit/s Ethernet", the "clear housing lets
you peek at the RTL8156 controller that powers it". Which is another
way to say "we didn't completely finish prod on this one, so it kind
of looks like we 3D-printed this in the shop"....
The card is a little bulky, but I guess that's inevitable considering
the RJ-45 form factor when compared to the thin Framework laptop.
I had a serious issue when first trying it: the link LEDs
just wouldn't come up. I made a full bug report in the forum and
with upstream support, but eventually figured it out on my own. It's
(of course) a power saving issue: if you reboot the machine, the links
come up while the laptop is running the BIOS POST check and even when
the Linux kernel boots.
I first thought that the problem was related to the powertop service
which I run at boot time to tweak some power saving settings, but the
actual culprit was tlp. By default, USB power saving is active in the
kernel, but not force-enabled for incompatible drivers: devices whose
drivers support suspension will suspend, and those that do not, will
not. tlp's USB_AUTOSUSPEND setting, however, force-enables autosuspend
across the board.
So the fix is actually to uninstall tlp or disable that setting by
adding this to /etc/tlp.conf:
USB_AUTOSUSPEND=0
... but that disables auto-suspend on all USB devices, which may
hurt power usage elsewhere. I have found that a
combination of:
USB_AUTOSUSPEND=1
USB_DENYLIST="0bda:8156"
and this on the kernel commandline:
usbcore.quirks=0bda:8156:k
... actually does work correctly. I now have this in my
/etc/default/grub.d/framework-tweaks.cfg file:
# net.ifnames=0: normal interface names ffs (e.g. eth0, wlan0, not wlp166s0)
# nvme.noacpi=1: reduce SSD disk power usage (not working)
# mem_sleep_default=deep: reduce power usage during sleep (not working)
# usbcore.quirk is a workaround for the ethernet card suspend bug: https://guides.frame.work/Guide/Fedora+37+Installation+on+the+Framework+Laptop/108?lang=en
GRUB_CMDLINE_LINUX="net.ifnames=0 nvme.noacpi=1 mem_sleep_default=deep usbcore.quirks=0bda:8156:k"
# fix the resolution in grub for fonts to not be tiny
GRUB_GFXMODE=1024x768
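After editing anything under /etc/default/grub.d/, the GRUB
configuration must be regenerated for the new command line to take
effect on the next boot:

update-grub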
Other than that, I haven't been able to max out the card because I
don't have other 2.5Gbit/s equipment at home, which is strangely
satisfying. But running against my Turris Omnia
router, I could pretty much max out a gigabit link fairly easily.
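For reference, the usual way to run such a test is iperf3; the address
here is a placeholder for wherever the iperf3 server is running:

iperf3 -c 192.168.1.1        # upload towards the server
iperf3 -c 192.168.1.1 -R     # -R reverses direction to test download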
The card doesn't require any proprietary firmware blobs which is
surprising. Other than the power saving issues, it just works.
In my power tests (see powerstat-wayland), the Ethernet card seems
to use about 1.6W of power idle, without link, in the above "quirky"
configuration where the card is functional but without autosuspend.
Proprietary firmware blobs
The Framework does need proprietary firmware to operate. Specifically:
- the WiFi network card shipped with the DIY kit is an AX210 card that requires a 5.19 kernel or later, and the firmware-iwlwifi non-free firmware package
- the Bluetooth adapter also loads the firmware-iwlwifi package (untested)
- the graphics work out of the box without firmware, but certain power management features come only with special proprietary firmware, normally shipped in the firmware-misc-nonfree package but currently missing from it
Note that, at the time of writing, the latest i915 firmware from
linux-firmware has a serious bug where loading all the
accessible firmware results in a noticeable lag (I estimate 200-500ms)
between the keyboard (not the mouse!) and the display. Symptoms also
include tearing and shearing of windows; it's pretty nasty.
One workaround is to delete the two affected firmware files:
cd /lib/firmware/i915 && rm adlp_guc_70.1.1.bin adlp_guc_69.0.3.bin
update-initramfs -u
You will get the following warning during build, which is good as
it means the problematic firmware is disabled:
W: Possible missing firmware /lib/firmware/i915/adlp_guc_69.0.3.bin for module i915
W: Possible missing firmware /lib/firmware/i915/adlp_guc_70.1.1.bin for module i915
But then it also means that critical firmware isn't loaded, which
means, among other things, a higher battery drain. I was able to move
from 8.5-10W down to the 7W range after making the firmware work
properly. This is also after turning the backlight all the way down,
as that takes a solid 2-3W at full blast.
The proper fix is to use some compositing manager. I ended up running
compton from a systemd user unit.
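A minimal sketch of such a unit; the exact compton flags and targets
here are assumptions, not my actual configuration:

# create a user unit for compton and enable it for the session
cat > ~/.config/systemd/user/compton.service <<'EOF'
[Unit]
Description=compton compositing manager

[Service]
ExecStart=/usr/bin/compton --backend glx
Restart=on-failure

[Install]
WantedBy=default.target
EOF
systemctl --user enable --now compton.service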
compton is orphaned however, so you might be tempted to use
picom instead, but in my experience the latter uses much
more power (1-2W extra, otherwise a similar experience). I also tried
compiz but it would just crash with:
anarcat@angela:~$ compiz --replace
compiz (core) - Warn: No XI2 extension
compiz (core) - Error: Another composite manager is already running on screen: 0
compiz (core) - Fatal: No manageable screens found on display :0
When running from the base session, I would get this instead:
Note that the iwlwifi firmware also looks incomplete. Even with
the package installed, I get these errors in dmesg:
[ 19.534429] Intel(R) Wireless WiFi driver for Linux
[ 19.534691] iwlwifi 0000:a6:00.0: enabling device (0000 -> 0002)
[ 19.541867] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-72.ucode (-2)
[ 19.541881] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-72.ucode (-2)
[ 19.541882] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-72.ucode failed with error -2
[ 19.541890] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-71.ucode (-2)
[ 19.541895] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-71.ucode (-2)
[ 19.541896] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-71.ucode failed with error -2
[ 19.541903] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-70.ucode (-2)
[ 19.541907] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-70.ucode (-2)
[ 19.541908] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-70.ucode failed with error -2
[ 19.541913] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-69.ucode (-2)
[ 19.541916] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-69.ucode (-2)
[ 19.541917] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-69.ucode failed with error -2
[ 19.541922] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-68.ucode (-2)
[ 19.541926] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-68.ucode (-2)
[ 19.541927] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-68.ucode failed with error -2
[ 19.541933] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-67.ucode (-2)
[ 19.541937] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-67.ucode (-2)
[ 19.541937] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-67.ucode failed with error -2
[ 19.544244] iwlwifi 0000:a6:00.0: firmware: direct-loading firmware iwlwifi-ty-a0-gf-a0-66.ucode
[ 19.544257] iwlwifi 0000:a6:00.0: api flags index 2 larger than supported by driver
[ 19.544270] iwlwifi 0000:a6:00.0: TLV_FW_FSEQ_VERSION: FSEQ Version: 0.63.2.1
[ 19.544523] iwlwifi 0000:a6:00.0: firmware: failed to load iwl-debug-yoyo.bin (-2)
[ 19.544528] iwlwifi 0000:a6:00.0: firmware: failed to load iwl-debug-yoyo.bin (-2)
[ 19.544530] iwlwifi 0000:a6:00.0: loaded firmware version 66.55c64978.0 ty-a0-gf-a0-66.ucode op_mode iwlmvm
Some of those are available in the latest upstream firmware package
(iwlwifi-ty-a0-gf-a0-71.ucode, -68, and -67), but not all
(e.g. iwlwifi-ty-a0-gf-a0-72.ucode is missing). It's unclear what
those files actually change, as the WiFi seems to work well without
them. I still copied them in from the latest linux-firmware package in
the hope they would help with power management, but I did not notice a
change after loading them.
There are also multiple knobs on the iwlwifi and iwlmvm
drivers. The latter has a power_scheme setting which defaults to
2 (balanced); setting it to 3 (low power) could improve
battery usage as well, in theory. The iwlwifi driver also has
power_save (defaults to disabled) and power_level (1-5, defaults
to 1) settings. See also the output of modinfo iwlwifi and
modinfo iwlmvm for other driver options.
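A sketch of how to set those persistently, using only the two knobs
described above (untested, and lower power may cost some latency):

# lower-power settings for the Intel WiFi drivers, applied at module load
cat > /etc/modprobe.d/iwlwifi-power.conf <<'EOF'
options iwlmvm power_scheme=3
options iwlwifi power_save=1
EOF
# reload the modules (or reboot) for this to take effect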
Graphics acceleration
After loading the latest upstream firmware and setting up a
compositing manager (compton, above), I tested the classic
glxgears.
Running in a window gives me odd results, as the gears basically grind
to a halt:
Running synchronized to the vertical refresh. The framerate should be
approximately the same as the monitor refresh rate.
137 frames in 5.1 seconds = 26.984 FPS
27 frames in 5.4 seconds = 5.022 FPS
Ouch. 5FPS!
But interestingly, once the window is in full screen, it does hit the
monitor refresh rate:
300 frames in 5.0 seconds = 60.000 FPS
I'm not really a gamer and I'm not normally using any of that fancy
graphics acceleration stuff (except maybe my browser does?).
I installed intel-gpu-tools for the intel_gpu_top
command to confirm the GPU was engaged when doing those simulations. A
nice find. Other useful diagnostic tools include glxgears and
glxinfo (in mesa-utils) and vainfo (in the vainfo package).
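For example, this is enough to see the GPU engines light up under
load:

glxgears -fullscreen &    # generate some GPU load
sudo intel_gpu_top        # watch the render engine utilisation climb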
Following this post, I also made sure to have those settings
in my about:config in Firefox, or, in user.js:
user_pref("media.ffmpeg.vaapi.enabled", true);
Note that the guide suggests many other settings to tweak, but those
might actually be overkill, see this comment and its parents. I
did try forcing hardware acceleration by setting gfx.webrender.all
to true, but everything became choppy and weird.
The guide also mentions installing the intel-media-driver package,
but I could not find that in Debian.
The Arch wiki has, as usual, an excellent reference on hardware
acceleration in Firefox.
Chromium / Signal desktop bugs
It looks like both Chromium and Signal Desktop misbehave with my
compositor setup (compton + i3). The fix is to add a persistent
flag to Chromium. In Arch, it's conveniently in
~/.config/chromium-flags.conf, but that doesn't actually work in
Debian. I had to put the flag in
/etc/chromium.d/disable-compositing.
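A sketch of what that file contains, assuming Debian's
/etc/chromium.d/ snippets are shell fragments that extend
CHROMIUM_FLAGS, and matching the Signal flag below:

# /etc/chromium.d/disable-compositing
export CHROMIUM_FLAGS="$CHROMIUM_FLAGS --disable-gpu-compositing"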
It's possible another one of the hundreds of flags might fix this
issue better, but I don't really have time to go through this entire,
incomplete, and unofficial list (!?!).
Signal Desktop has a similar problem, and doesn't reuse those flags
(because of course it doesn't). Instead I had to rewrite the wrapper
script in /usr/local/bin/signal-desktop to use this:
exec /usr/bin/flatpak run --branch=stable --arch=x86_64 org.signal.Signal --disable-gpu-compositing "$@"
This was mostly done in this Puppet commit.
I haven't figured out the root of this problem. I did try using
picom and xcompmgr; they both suffer from the same issue. Another
Debian testing user on Wayland told me they haven't seen this problem,
so hopefully this can be fixed by switching to Wayland.
Graphics card hangs
I believe I might have this bug which results in a total
graphical hang for 15-30 seconds. It's fairly rare so it's not too
disruptive, but when it does happen, it's pretty alarming.
The comments on that bug report are encouraging though: it seems this
is a bug in either mesa or the Intel graphics driver, which means many
people have this problem so it's likely to be fixed. There's actually
a merge request on mesa already (2022-12-29).
It could also be that bug because the error message I get is
actually:
Jan 20 12:49:10 angela kernel: Asynchronous wait on fence 0000:00:02.0:sway[104431]:cb0ae timed out (hint:intel_atomic_commit_ready [i915])
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] GPU HANG: ecode 12:0:00000000
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] Resetting chip for stopped heartbeat on rcs0
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] GuC firmware i915/adlp_guc_70.1.1.bin version 70.1
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] HuC firmware i915/tgl_huc_7.9.3.bin version 7.9
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] HuC authenticated
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] GuC submission enabled
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] GuC SLPC enabled
It's a solid 30-second graphical hang, though the keyboard and
everything else seems to keep working. The latter bug report is quite
long, with many comments, but this one from January 2023 seems to say
that Sway 1.8 fixed the problem. There's also an earlier patch to
add an extra kernel parameter that supposedly fixes that too. There's
all sorts of other workarounds in there, for example this one,
from this comment... So that one is unsolved, as far as the
upstream drivers are concerned, but maybe it could be fixed through
Sway.
Weird USB hangs / graphical glitches
I have had weird connectivity glitches better described in this
post, but basically: my USB keyboard and mice (connected over a
USB hub) drop keys, lag a lot or hang, and I get visual glitches.
The fix was to tighten the screws around the CPU on the motherboard
(!), which is, thankfully, a rather simple repair.
USB docks are hell
Note that the monitors are hooked up to angela through a USB-C /
Thunderbolt dock from Cable Matters, with the lovely name of
201053-SIL. It has issues, see this blog
post for an in-depth discussion.
Shipping details
I ordered the Framework in August 2022 and received it about a month
later, which is sooner than expected because the August batch was
late.
People (including me) expected this to have an impact on the September
batch, but it seems Framework have been able to fix the delivery
problems and keep up with the demand.
As of early 2023, their website announces that laptops ship "within 5
days". I have myself ordered a few expansion cards in November 2022,
and they shipped on the same day, arriving 3-4 days later.
The supply pipeline
There are basically 6 steps in the Framework shipping pipeline, each
(except the last) accompanied by an email notification:
pre-order
preparing batch
preparing order
payment complete
shipping
(received)
This comes from the crowdsourced spreadsheet, which should be
updated when the status changes here.
I was part of the "third batch" of the 12th generation laptop, which
was supposed to ship in September. It ended up arriving on my
doorstep on September 27th, about 33 days after ordering.
It seems current orders are not processed in "batches", but in real
time, see this blog post for details on shipping.
Shipping trivia
I don't know about the others, but my laptop shipped through no less
than four different airplane flights. Here are the hops it took:
I can't quite figure out how to calculate exactly how much mileage
that is, but it's huge. The ride through Alaska is surprising enough
but the bounce back through Winnipeg is especially weird. I guess
the route happens that way because of FedEx shipping hubs.
There was a related oddity when I had my Purism laptop shipped: it
left from the west coast and seemed to enter on an endless, two week
long road trip across the continental US.
I just got an E5-2696 v3 CPU for my ML110 Gen9 home workstation, this has a Passmark score of 23326 which is almost 3 times faster than the E5-2620 v4 which rated 9224. Previously it took over 40 minutes of real time to compile a 6.10 kernel that was based on the Debian kernel configuration, now it takes 14 minutes of real time, 202 minutes of user time, and 37 minutes of system CPU time. That's a definite benefit of having a faster CPU, I don't often compile kernels but when I do I don't want to wait 40+ minutes for a result. I also expanded the system from 96G of RAM to 128G, most of the time I don't need so much RAM but it's better to have too much than too little, particularly as my friend got me a good deal on RAM. The extra RAM might have helped improve performance too, going from 6/8 DIMM slots full to 8/8 might help the CPU balance access.
That series of HP machines has a plastic mounting bracket for the CPU, see this video about the HP Proliant Smart Socket for details [1]. I was working on this with a friend who has the same model of HP server as I do: after buying myself a system I was so happy with it that I bought another one when I saw it going for a good price, and then sold it to my friend when I realised that I had too many tower servers at home. It turns out that getting the same model of computer as a friend is a really good strategy, so then you can work together to solve problems with it. My friend's first idea was to try and buy new clips for the new CPUs (which would have delayed things and cost more money), but Reddit and some blog posts suggested that you can just skip the smart-socket guide clip; when the chip was resting in the socket it felt secure, as the protrusions on the sides of the socket fit firmly enough into the notches in the CPU to prevent it moving far enough to short a connection. Testing on 2 systems showed that you don't need the clip. As an aside it would be nice if Intel made every CPU that fits a particular socket have the same physical dimensions so clips and heatsinks can work well on all CPUs.
The TDP of the new CPU is 145W and the old one was 85W. One would hope that in a server class system that wouldn't make a lot of difference, but unfortunately the difference was significant. Previously I could have the system running 7/8 cores with BOINC 24*7 and I wouldn't notice the fans being louder. It is possible that 100% CPU use on a hot day might make the fans sound louder if I didn't have an air-conditioner on that was loud enough to drown them out, but the noteworthy fact is that with the previous CPU the system fans were a minor annoyance. Now if I have 16 cores running BOINC it's quite loud, the sort of noise that makes most people avoid using tower servers as workstations! I've found that if I limit it to 4 or 5 cores then the system is about as quiet as it was before. As a rough approximation I can use as much CPU power as before without making the fans louder, but if I use more CPU power than was previously available it gets noisy.
I also got some new NVMe devices. I was previously using 2*Crucial 1TB P1 NVMes in a BTRFS RAID-1 and now I have 2*Crucial 1TB P3 NVMes (where P1 is the slowest Crucial offering, P3 is better and more expensive, P5 is even better, etc). When doing the BTRFS migrations to move my workstation to the new NVMe devices and my server to the old NVMe devices, I found that the P3 series seem to have a limit of about 70MB/s for sustained random writes and the P1 series about 35MB/s. Apparently the cheaper NVMe devices slow down if you do lots of random writes; it's a pity that all the review articles talking about GB/s speeds don't mention this. To see how bad reviews are, Google some reviews of these SSDs: you will find a couple of comment threads on places like Reddit about them slowing down with lots of writes, and lots of review articles on well known sites that don't mention it. Generally I'd recommend not upgrading from P1 to P3 NVMe devices, the benefit isn't enough to cover the effort. For every capacity of NVMe device the most expensive devices cost more than twice as much as the cheapest, and sometimes it will be worth the money. Getting the most expensive device won't guarantee great performance, but getting cheap devices will guarantee that it's slow.
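A rough sketch of how to reproduce this kind of measurement with fio, using a placeholder path; run it long enough for any SLC write cache to fill:

# sustained 4k random writes; watch the reported bandwidth settle over time
fio --name=randwrite --filename=/mnt/scratch/fio.test --size=32G \
    --rw=randwrite --bs=4k --direct=1 --ioengine=libaio \
    --iodepth=32 --runtime=600 --time_based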
It seems that CPU development isn't progressing as well as it used to. The CPU I just bought was released in 2015 and scored 23,343 according to Passmark [2]. The most expensive Intel CPU on offer at my local computer store is the i9-13900K, which was released this year and scores 62,914 [3]. One might say that CPUs designed for servers are different from ones designed for desktop PCs, but the i9 in question has a TDP Up of 253W which is too big for the PSU I have! According to the HP web site, the new ML110 Gen10 servers aren't sold with a CPU as fast as the E5-2696 v3! In the period from 1988 to about 2015 every year there were new CPUs with new capabilities that were worth an upgrade. Now for the last 8 years or so there hasn't been much improvement at all. Buy a new PC for better USB ports or something, not for a faster CPU!
There's a bunch of ways you can store cryptographic keys. The most obvious is to just stick them on disk, but that has the downside that anyone with access to the system could just steal them and do whatever they wanted with them. At the far end of the scale you have Hardware Security Modules (HSMs), hardware devices that are specially designed to self destruct if you try to take them apart and extract the keys, and which will generate an audit trail of every key operation. In between you have things like smartcards, TPMs, Yubikeys, and other platform secure enclaves - devices that don't allow arbitrary access to keys, but which don't offer the same level of assurance as an actual HSM (and are, as a result, orders of magnitude cheaper).
The problem with all of these hardware approaches is that they have entirely different communication mechanisms. The industry realised this wasn't ideal, and in 1994 RSA released version 1 of the PKCS#11 specification. This defines a C interface with a single entry point - C_GetFunctionList. Applications call this and are given a structure containing function pointers, with each entry corresponding to a PKCS#11 function. The application can then simply call the appropriate function pointer to trigger the desired functionality, such as "Tell me how many keys you have" and "Sign this, please". This is both an example of C not just being a programming language and also of you having to shove a bunch of vendor-supplied code into your security critical tooling, but what could possibly go wrong.
(Linux distros work around this problem by using p11-kit, which is a daemon that speaks d-bus and loads PKCS#11 modules for you. You can either speak to it directly over d-bus, or for apps that only speak PKCS#11 you can load a module that just transports the PKCS#11 commands over d-bus. This moves the weird vendor C code out of process, and also means you can deal with these modules without having to speak the C ABI, so everyone wins)
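As a concrete illustration, on a Linux system you can poke at all of this from the shell with p11-kit and OpenSC's pkcs11-tool; the module path here is just a placeholder:

p11-kit list-modules                                            # modules p11-kit knows about
pkcs11-tool --module /usr/lib/pkcs11/example.so --list-slots    # any tokens present?
pkcs11-tool --module /usr/lib/pkcs11/example.so --list-objects  # "tell me what keys you have"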
One of my work tasks at the moment is helping secure SSH keys, ensuring that they're only issued to appropriate machines and can't be stolen afterwards. For Windows and Linux machines we can stick them in the TPM, but Macs don't have a TPM as such. Instead, there's the Secure Enclave - part of the T2 security chip on x86 Macs, and directly integrated into the M-series SoCs. It doesn't have anywhere near as many features as a TPM, let alone an HSM, but it can generate NIST curve elliptic curve keys and sign things with them and that's good enough. Things are made more complicated by Apple only allowing keys to be used by the app that generated them, so it's hard for applications to generate keys on behalf of each other. This can be mitigated by using CryptoTokenKit, an interface that allows apps to present tokens to the systemwide keychain. Although this is intended for allowing a generic interface for access to such tokens (kind of like PKCS#11), an app can generate its own keys in the Secure Enclave and then expose them to other apps via the keychain through CryptoTokenKit.
Of course, applications then need to know how to communicate with the keychain. Browsers mostly do so, and Apple's version of SSH can to an extent. Unfortunately, that extent is "Retrieve passwords to unlock on-disk keys", which doesn't help in our case. PKCS#11 comes to the rescue here! Apple ship a module called ssh-keychain.dylib, a PKCS#11 module that's intended to allow SSH to use keys that are present in the system keychain. Unfortunately it's not super well maintained - it got broken when Big Sur moved all the system libraries into a cache, but got fixed up a few releases later. Unfortunately every time I tested it with our CryptoTokenKit provider (and also when I retried with SecureEnclaveToken to make sure it wasn't just our code being broken), ssh would tell me "provider /usr/lib/ssh-keychain.dylib returned no slots" which is not especially helpful. Finally I realised that it was actually generating more debug output, but it was being sent to the system debug logs rather than the ssh debug output. Well, when I say "more debug output", I mean "Certificate []: algorithm is not supported, ignoring it", which still doesn't tell me all that much. So I stuck it in Ghidra and searched for that string, and the line above it was
with it immediately failing if the key isn't RSA. Which it isn't, since the Secure Enclave doesn't support RSA. Apple's PKCS#11 module appears incapable of making use of keys generated on Apple's hardware.
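For reference, this is roughly how the module gets exercised in the first place, with a placeholder host:

# -v is what surfaces the "returned no slots" complaint on stderr
ssh -v -o PKCS11Provider=/usr/lib/ssh-keychain.dylib user@example.com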
There's a couple of ways of dealing with this. The first, which is taken by projects like Secretive, is to implement the SSH agent protocol and have SSH delegate key management to that agent, which can then speak to the keychain. But if you want this to work in all cases you need to implement all the functionality in the existing ssh-agent, and that seems like a bunch of work. The second is to implement a PKCS#11 module, which sounds like less work but probably more mental anguish. I'll figure that out tomorrow.
Holger Levsen: Welcome, David, thanks for taking the time to talk with us today. First, could you briefly tell me about yourself?
David: Sure! I'm David A. Wheeler and
I work for the Linux Foundation as the Director of Open Source Supply Chain Security.
That just means that my job is to help open source software projects
improve their security, including its development, build, distribution,
and incorporation in larger works, all the way out to its eventual use by end-users.
In my copious free time I also teach at George Mason University (GMU); in particular,
I teach a graduate course on how to design and implement secure software.
My background is technical. I have a Bachelor's in Electronics Engineering,
a Master's in Computer Science and a PhD in Information Technology.
My PhD dissertation is connected to reproducible builds: it was on
countering the Trusting Trust attack, an attack
that subverts fundamental build system tools such as compilers.
The attack was discovered by Karger & Schell in the 1970s, and later
demonstrated & popularized by Ken Thompson.
In my dissertation on trusting trust I showed that a process
called Diverse Double-Compiling (DDC) could detect trusting trust attacks.
That process is a specialized kind of reproducible build specifically designed
to detect trusting trust style attacks. In addition, countering the
trusting trust attack becomes important mainly once reproducible
builds become more common. Reproducible builds enable detection of
build-time subversions.
Most attackers wouldn't bother with a trusting trust attack if they could just
directly use a build-time subversion of the software they actually want to subvert.
Holger: Thanks for taking the time to introduce yourself to us. What do you think are the biggest challenges today in computing?
There are many big challenges in computing today. For example:
Lack of resilience & capacity in chip fabrication. Fabs are extraordinarily expensive,
and at the high end continue to advance technologically.
As a result, supply is failing to meet demand, and geopolitical issues raise further concerns.
We've seen cars, gaming consoles and many other devices
unable to be delivered due to chip shortages. More fabs are
being built, and some politicians are raising concerns, but it's unclear
that current efforts will be enough.
Lack of enough developers able to develop the software that people & organizations need.
Computers are far faster, and open source software has made software reuse
incredibly easy. However, organizations still struggle to automate
many tasks. The bottleneck is the lack of enough talented developers able to convert
ideas into working software. Low-code and no-code approaches help in specialized areas,
just like all previous "automate the programmer" efforts of the last 60 years, but
there's no reason to believe they will help enough.
Large scale of software. Small systems are easier to develop & maintain, but today's
systems increasingly get bigger to meet users' needs & are much harder to manage.
Even small embedded systems are often supported by huge back-end systems.
Ending tail of Moore's law & rise of smartphones. Historically people would just wait a few years for their
software to speed up, but Moore's law is petering out, and smartphones are necessarily
limited by power & size constraints. As a result, software developers
can't wait for the hardware to save their slow systems; they must redesign.
Switching to faster languages, or using multiple processors, is much more difficult than
waiting for performance problems to disappear.
Continuous change in interfaces. Developers continuously find reasons to change
component interfaces: perhaps they're too inflexible, too hard to use, and so on.
But now that developers are reusing hundreds, thousands, or tens of thousands of components,
managing the continuous change of the reused components is challenging.
Package managers make updating easy but don't automatically handle interface changes.
I think this is mostly a self-inflicted problem: most components could support old interfaces
(like the Linux kernel does), but because it's often not acknowledged as a problem, it's often not addressed.
Security & privacy. Decades ago there were fewer computers and most computers weren't connected to a network.
Today things are different. Criminals have found many ways to attack computer systems to
make money, and nation-states have found many ways to attack computer systems for their own reasons.
Attackers now have very strong motivations to perform attacks.
Yet many developers aren't told how to develop software that resists attacks, nor
how to protect their supply chains. Operations teams try to monitor and recover from
attacks, but their job is made difficult by inadequately secure software that doesn't
support those monitoring & recovery efforts well either. The result is terrible security.
Holger: Do you think reproducible builds are an important part in secure computing today already?
David: Yes, but first let's put things in context.
Today, when attackers exploit software vulnerabilities, they're primarily
exploiting unintentional vulnerabilities that were created by the software
developers. There are a lot of efforts to counter this:
Train & educate developers in how to develop secure software.
The OpenSSF provides a free course on how to do that (full disclosure: I'm the author).
Take that course or something like it!
Add tools to your CI pipeline to detect potential vulnerabilities. Yes, they have false
positives and false negatives, so you also have to use your brain, but that just means you
need to be smart about using tools, instead of not using them.
Get projects & organizations to update the components they use,
since often the vulnerabilities are well-known publicly
(e.g., Equifax in 2017). Add some tools to your development process to warn you about
components with known vulnerabilities! GitHub & GitLab both provide tools to do this,
and there are many other tools.
When starting new projects, try to use memory-safe languages. On average, 70% of the
vulnerabilities in Chrome and in Microsoft products stem from memory safety problems; using a memory-safe
language eliminates most of them.
We're just starting to get better at this, which is good. However, attackers always
try to attack the easiest target. As our deployed software has started to be hardened
against attack, attackers have dramatically increased their attacks
on the software supply chain (Sonatype found in 2022 that there's been a 742% increase year-over-year).
The software supply chain hasn't historically gotten much attention, making it the easy target.
There are simple supply chain attacks with simple solutions:
In almost every year the top attack has been typosquatting: an
attacker creates packages with almost the right name. This is an easy attack to
counter: developers just need to double-check the name of a package before adding it.
But we aren't warning developers enough about it!
For more information, see papers such as the Backstabber's Knife Collection.
Last year the top software supply chain attack was dependency confusion: convincing
projects to use the wrong repo for a given package. There are simple solutions to this, such as
specifying the package source and/or requiring a cryptographic hash to match.
Some attacks involve takeovers of developer accounts. In almost all cases, these are
caused by stolen passwords. Using a multi-factor authentication (MFA) token eliminates
stolen password attacks, which is why several
repositories are starting to require MFA tokens in some cases.
Unfortunately, attackers know there are other lines of attack.
One of the most dangerous is subverted build systems, as demonstrated by
the subversion of SolarWinds' Orion system. In a subverted build system,
developers can review the software source code all day and see no problem,
because there is no problem there. Instead, the process that converts source code
into the code people run, called the "build system", is subverted by an attacker.
One solution for countering subverted build systems is to make the build systems harder
to attack. That's a good thing to do, but you can never be confident that it was "good enough".
How can you be sure it's not subverted, if there's no way to know?
A stronger defense against subverted build systems is the idea of verified reproducible builds.
A build is reproducible if, given the same source code, build environment and build instructions,
any party can recreate bit-by-bit identical copies of all specified artifacts.
A build is verified if multiple different parties verify that they get the same result for that situation.
When you have a verified reproducible build, either all the parties colluded
(and you could always double-check it yourself), or the build process isn't subverted.
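In miniature, and with placeholder names throughout, the verification looks like this:

# build the same source twice, independently, and compare the bits
git clone https://example.org/project.git build-a
cp -a build-a build-b
(cd build-a && make)
(cd build-b && make)
sha256sum build-a/output.bin build-b/output.bin   # must be identical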
There is one last turtle: what if the build system tools or machines are subverted themselves?
This is not a common attack today, but it's important to know whether we can address it
when the time comes. The good news is that we can.
For some situations reproducible builds can also counter such attacks.
If there's a loop (that is, a compiler is used to generate itself), that's called the trusting trust attack,
and that is more challenging. Thankfully, the trusting trust attack has been known about for
decades and there are known solutions. The diverse double-compiling (DDC) process that
I explained in my PhD dissertation, as well as the bootstrappable builds process, can
both counter trusting trust attacks in the software space. So there is no reason to lose hope:
there is a "bottom turtle", as it were.
Holger: Thankfully, this has all slowly started to change and supply chain issues are now widely discussed, as evidenced by efforts like
Securing the Software Supply Chain: Recommended Practices Guide for Developers,
which you shared on our mailing list. In there, Reproducible Builds are mentioned as a recommended advanced practice, which is both pretty cool (we've come a long way!), but to me it also sounds like it will take another decade until it becomes standard, normal procedure. Do you agree with that timeline?
David: I don't think there will be any particular timeframe. Different projects and
ecosystems will move at different speeds. I wouldn't be surprised if it
took a decade or so for them to become relatively common; there are
good reasons for that.
Today the most common kinds of attacks based on software
vulnerabilities still involve unintentional vulnerabilities in operational systems.
Attackers are starting to apply supply chain attacks, but the top such attacks
today are typosquatting (creating packages with similar names) and
dependency confusion (convincing projects to download packages from the wrong
repositories).
Reproducible builds don't counter those kinds of attacks; they
counter subverted builds. It's important to eventually have verified
reproducible builds, but understandably other issues are currently getting
prioritized first.
That said, reproducible builds are important long term.
Many people are working on countering unintentional vulnerabilities
and the most common kinds of supply chain attacks.
As these other threats are countered, attackers will increasingly target
build systems. Attackers always go for the weakest link.
We will eventually need verified reproducible builds in many situations, and
it'll take a while to get build systems able to widely perform reproducible builds,
so we need to start that work now. That's true for anything where you know
you'll need it but it will take a long time to get ready: you need to start now.
Holger: What are your suggestions to accelerate adoption?
David: Reproducible builds need to be:
Easy (ideally automatic). Tools need to be modified so that reproducible builds
are the default, or at least easier to do.
Transparent to projects & potential users. Many projects have no idea that their results aren't
reproducible, and many potential users of the project don't know either.
That information needs to be obvious. I've proposed that the OpenSSF
Dashboard SIG try to reproduce builds, for at least some packages, to make it
more obvious to everyone when a project isn't reproducible. I don't know if that
will happen in that particular case, but the point is to help people learn that information
as soon as possible.
Deployed.
Experiments are great, but experiments showing that a project could be reproducible
are inadequate. We need the projects that people use to be reproducible.
I think there's a snowball effect. Once many projects' packages are reproducible,
it will be easier to convince other projects to make their packages reproducible.
I also think there should be some prioritization. If a package is in wide use
(e.g., part of the minimum set of packages for a widely-used Linux distribution or
framework), its reproducibility should be a special focus. If a package is vital for
supporting some societally important critical infrastructure (e.g., running dams),
it should also be considered important. You can then work on the
ones that are less important over time.
Holger: How is the Best Practices Badge going? How many projects are participating and how many are missing?
David: It's going very well. You can see some automatically-generated statistics, showing we have over 5,000 projects, adding more than 1/day on average.
We have more than 900 projects that have earned at least the passing badge level.
Holger: How many of the projects participating in the Best Practices badge are engaging with reproducible builds?
David: As of this writing there are 168 projects that report meeting the reproducible builds criterion.
That's a relatively small percentage of projects. However, note that this criterion (labelled build_reproducible)
is only required for the gold badge; it's not required for the passing or silver badge levels.
Currently we've been strategically focused on getting projects to at least earn a passing badge,
and less on earning silver or gold badges.
We would love for all projects to earn a silver or gold badge, of course, but
our theory is that projects that can't even earn a passing badge present the most risk to their users.
That said, there are some projects we especially want to see implementing higher badge levels.
Those include projects that are very widely used, so that
vulnerabilities in them can impact many systems.
Examples of such projects include the Linux kernel and curl.
In addition, some projects are used within
systems where it's important to society that they not have serious security vulnerabilities.
Examples include projects used by
chemical manufacturers, financial systems and weapons.
We definitely encourage any of those kinds of projects to earn higher badge levels.
Holger: Many thanks for this interview, David, and for all of your work at the Linux Foundation and elsewhere!
For more information about the Reproducible Builds project, please see our website at
reproducible-builds.org. If you are interested in
ensuring the ongoing security of the software that underpins our civilisation
and wish to sponsor the Reproducible Builds project, please reach out to the
project by emailing
contact@reproducible-builds.org.
I've been ridiculously burned out for a while now but I'm taking the month off to recover and that's giving me an opportunity to catch up on a lot of stuff. This has included me actually writing some code to work with the Pluton in my Thinkpad Z13. I've learned some more stuff in the process, but based on everything I know I'd still say that in its current form Pluton isn't a threat to free software.
So, first up: by default on the Z13, Pluton is disabled. It's not obviously exposed to the OS at all, which also means there's no obvious mechanism for Microsoft to push out a firmware update to it via Windows Update. The Windows drivers that bind to Pluton don't load in this configuration. It's theoretically possible that there's some hidden mechanism to re-enable it at runtime, but that code doesn't seem to be in Windows at the moment. I'm reasonably confident that "Disabled" is pretty genuinely disabled.
Second, when enabled, Pluton exposes two separate devices. The first of these has an MSFT0101 identifier in ACPI, which is the ID used for a TPM 2 device. The Pluton TPM implementation doesn't work out of the box with existing TPM 2 drivers, though, because it uses a custom start method. TPM 2 devices commonly use something called a "Command Response Buffer" architecture, where a command is written into a buffer, the TPM is told to do a thing, and the response to the command ends up in another buffer. The mechanism to tell the TPM to do a thing varies, and an ACPI table exposed to the OS defines which of those various things should be used for a given TPM. Pluton systems have a mechanism that isn't defined in the existing version of the spec (1.3 rev 8 at the time of writing), so I had to spend a while staring hard at the Windows drivers to figure out how to implement it. The good news is that I now have a patch that successfully gets the existing Linux TPM driver code to work correctly with the Pluton implementation.
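As a quick sanity check on such a system, one can look for the ACPI device and the resulting TPM character devices from Linux; a sketch, with device names that may vary:

cat /sys/bus/acpi/devices/MSFT0101:00/path   # is the TPM 2 device exposed by firmware?
ls -l /dev/tpm0 /dev/tpmrm0                  # did the (patched) driver bind to it?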
The second device has an MSFT0200 identifier, and is entirely not a TPM. The Windows driver appears to be a relatively thin layer that simply takes commands from userland and passes them on to the chip - I haven't found any userland applications that make use of this, so it's tough to figure out what functionality is actually available. But what does seem pretty clear from the code I've looked at is that it's a component that only responds when it's asked - if the OS never sends it any commands, it's not able to do anything.
One key point from this recently published Microsoft doc is that the whole "Microsoft can update Pluton firmware" thing does just seem to be the ability for the OS to push new code to the chip at runtime. That means Microsoft can't arbitrarily push new firmware to the chip - the OS needs to be involved. This is unsurprising, but it's nice to see some stronger confirmation of that.
Anyway. tl;dr - Pluton can (now) be used as a regular TPM. Pluton also exposes some additional functionality which is not yet clear, but there's no obvious mechanism for it to compromise user privacy or restrict what users can run on a Free operating system. The Pluton firmware update mechanism appears to be OS mediated, so users who control their OS can simply choose not to opt in to that.